In [1]:
import math
import folium
import numpy as np
import pandas as pd


import geopandas as gpd
from shapely.geometry import Point, Polygon, MultiPolygon
from folium.plugins import PolyLineTextPath
from geopy.distance import geodesic

import branca.colormap as cm

import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns

import openmeteo_requests
import requests_cache
from retry_requests import retry

from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error 
from sklearn.model_selection import GroupKFold, GridSearchCV, cross_val_score, cross_validate, train_test_split
from sklearn.compose import TransformedTargetRegressor

from IPython.display import display, HTML
In [2]:
%load_ext autoreload
%autoreload 2
In [3]:
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
In [4]:
pd.set_option('display.max_columns', None)

Executive Summary¶

This notebook presents an in-depth analysis of bike-sharing trips in Munich throughout 2023, aiming to uncover patterns and actionable insights from seemingly ordinary bike usage data. The analysis combines classical exploratory techniques, visualised in several different ways, with a machine learning model that captures interdependencies between variables derived from the dataset and predicts a target variable on previously unseen data. The key conclusions of the analysis align with common sense and practical expectations.

Key Highlights:

  • Trip activity is concentrated in the city centre, with most trips starting and ending within several central polygons into which the city area is divided.
  • Distributions of trip duration and distance follow expected patterns — longer trips are increasingly rare, in line with the intended use case of short urban journeys.
  • Trip activity exhibits various seasonal patterns and can be strongly influenced by external factors such as public transport strikes.
  • Under conservative assumptions, it is possible to reliably reconstruct some trip chains made by the same bike — even without explicit bike identifiers.
  • Based on the reconstructed trip chains, idle time can be estimated in some cases. Its distribution agrees with common sense, and detailed analysis of the outliers may offer actionable insights.
  • Simple features derived from trip and weather data allow for accurate prediction of trip activity, as shown by fairly good model performance on unseen data.

Data Loading and Cleaning¶

The goal of this section is to take a first look at the data, ensure correct data formats and types (pandas dtypes), perform plausibility checks and, in general, make sure that we can rely on the data in the subsequent sections.

In [5]:
data = pd.read_csv('MVG_Rad_Fahrten_2023.csv', sep=';', decimal=',')
data
/var/folders/0j/hc4b4c153dzfbvkzcxfbtjv80000gn/T/ipykernel_1324/76297985.py:1: DtypeWarning: Columns (7) have mixed types. Specify dtype option on import or set low_memory=False.
  data = pd.read_csv('MVG_Rad_Fahrten_2023.csv', sep=';', decimal=',')
Out[5]:
Row STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_IS_STATION RENTAL_STATION_NAME RETURN_IS_STATION RETURN_STATION_NAME
0 1 2023-01-01 00:26 2023-01-01 00:51 48.13795 11.54569 48.16123 11.55782 0 1 Barbarastr
1 2 2023-01-01 00:30 2023-01-01 00:42 48.12903 11.54431 48.14797 11.53445 0 0
2 3 2023-01-01 00:32 2023-01-01 00:45 48.16841 11.55566 48.16467 11.57649 0 0
3 4 2023-01-01 00:34 2023-01-01 00:46 48.16843 11.55567 48.16464 11.57648 0 0
4 5 2023-01-01 00:35 2023-01-01 00:51 48.17104 11.54878 48.16243 11.53007 0 0
... ... ... ... ... ... ... ... ... ... ... ...
710101 710102 2023-12-31 23:52 2023-12-31 23:57 48.16719 11.55854 48.16917 11.55547 0 0
710102 710103 2023-12-31 23:52 2024-01-01 00:07 48.17061 11.57391 48.17334 11.55952 0 0
710103 710104 2023-12-31 23:53 2023-12-31 23:57 48.14131 11.56144 48.14094 11.56044 0 0
710104 710105 2023-12-31 23:54 2023-12-31 23:59 48.12353 11.54494 48.12674 11.54758 0 0
710105 710106 2023-12-31 23:54 2024-01-01 00:06 48.12376 11.54871 48.12376 11.54871 1 Kreisverwaltungsreferat 1 Kreisverwaltungsreferat

710106 rows × 11 columns

The dataset contains detailed information about bike trips, with the following key variables:

  • Trip timing: Start and end times of each journey
  • Location data: Coordinates (latitude/longitude) for both start and end points
  • Station information: Names of rental and return stations (when applicable)
In [6]:
# column names contain trailing spaces
data.columns = data.columns.str.strip()
# drop the Row column, but keep the index values duplicated in a separate column; this will make some analyses more convenient
data.drop(columns=['Row'], inplace=True)
data.reset_index(names='trip_index', inplace=True)
data
Out[6]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_IS_STATION RENTAL_STATION_NAME RETURN_IS_STATION RETURN_STATION_NAME
0 0 2023-01-01 00:26 2023-01-01 00:51 48.13795 11.54569 48.16123 11.55782 0 1 Barbarastr
1 1 2023-01-01 00:30 2023-01-01 00:42 48.12903 11.54431 48.14797 11.53445 0 0
2 2 2023-01-01 00:32 2023-01-01 00:45 48.16841 11.55566 48.16467 11.57649 0 0
3 3 2023-01-01 00:34 2023-01-01 00:46 48.16843 11.55567 48.16464 11.57648 0 0
4 4 2023-01-01 00:35 2023-01-01 00:51 48.17104 11.54878 48.16243 11.53007 0 0
... ... ... ... ... ... ... ... ... ... ... ...
710101 710101 2023-12-31 23:52 2023-12-31 23:57 48.16719 11.55854 48.16917 11.55547 0 0
710102 710102 2023-12-31 23:52 2024-01-01 00:07 48.17061 11.57391 48.17334 11.55952 0 0
710103 710103 2023-12-31 23:53 2023-12-31 23:57 48.14131 11.56144 48.14094 11.56044 0 0
710104 710104 2023-12-31 23:54 2023-12-31 23:59 48.12353 11.54494 48.12674 11.54758 0 0
710105 710105 2023-12-31 23:54 2024-01-01 00:06 48.12376 11.54871 48.12376 11.54871 1 Kreisverwaltungsreferat 1 Kreisverwaltungsreferat

710106 rows × 11 columns

In [7]:
# Convert STARTTIME and ENDTIME to datetime64. dtype of the rest of the columns is already correct
for col in ['STARTTIME', 'ENDTIME']:
    data[col] = pd.to_datetime(data[col])
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 710106 entries, 0 to 710105
Data columns (total 11 columns):
 #   Column               Non-Null Count   Dtype         
---  ------               --------------   -----         
 0   trip_index           710106 non-null  int64         
 1   STARTTIME            710106 non-null  datetime64[ns]
 2   ENDTIME              710106 non-null  datetime64[ns]
 3   STARTLAT             710106 non-null  float64       
 4   STARTLON             710106 non-null  float64       
 5   ENDLAT               710106 non-null  float64       
 6   ENDLON               710106 non-null  float64       
 7   RENTAL_IS_STATION    710106 non-null  object        
 8   RENTAL_STATION_NAME  710106 non-null  object        
 9   RETURN_IS_STATION    710106 non-null  object        
 10  RETURN_STATION_NAME  710106 non-null  object        
dtypes: datetime64[ns](2), float64(4), int64(1), object(4)
memory usage: 59.6+ MB
In [8]:
# Values in these columns also contain trailing spaces
for col in ['RENTAL_IS_STATION', 'RENTAL_STATION_NAME', 'RETURN_IS_STATION', 'RETURN_STATION_NAME']:
    data[col] = data[col].str.strip()
In [9]:
data.query('RENTAL_IS_STATION.isna()')['RENTAL_STATION_NAME'].value_counts(dropna=False)
data.query('RETURN_IS_STATION.isna()')['RETURN_STATION_NAME'].value_counts(dropna=False)

# RENTAL_IS_STATION is not reliable since it can be NA when the trip starts at a station.
# So let's just disregard RENTAL_IS_STATION and RETURN_IS_STATION and drop them. 
# Whether the trip started or finished at a station can be concluded from RENTAL_STATION_NAME and RETURN_STATION_NAME.

data.drop(columns=['RETURN_IS_STATION', 'RENTAL_IS_STATION'], inplace=True)
Out[9]:
RENTAL_STATION_NAME
                                     51366
Sandstraße                             327
TUM Arcisstraße                        250
Hauptbahnhof Nord                      231
Olympiazentrum                         216
                                     ...  
Waldsiedlung Faistenhaar                 1
Ferd.-Kobell-Straße Haar                 1
Ottobrunner Straße Faistenhaar           1
S-Bahnhof Wächterhof                     1
Mallertshoffener Unterschleißheim        1
Name: count, Length: 315, dtype: int64
Out[9]:
Series([], Name: count, dtype: int64)
In [10]:
# Let's make sure each station has the same coordinates in the entire dataset
all_stations_with_coord = pd.concat([data[['RENTAL_STATION_NAME', 'STARTLAT', 'STARTLON']].rename(columns={'RENTAL_STATION_NAME': 'station', 
                                                                                 'STARTLAT': 'lat', 
                                                                                 'STARTLON': 'lon'}), 
           data[['RETURN_STATION_NAME', 'ENDLAT', 'ENDLON']].rename(columns={'RETURN_STATION_NAME': 'station', 
                                                                                 'ENDLAT': 'lat', 
                                                                                 'ENDLON': 'lon'})], axis=0)
std_deviations = all_stations_with_coord.groupby('station').std()
std_deviations.head()
std_deviations.query('lat != 0 | lon != 0')
# Indeed, the standard deviation of the coordinates differs from 0 only for the points that are not stations (empty station name).
# Otherwise it is zero, so the coordinates of each station are consistent across the dataset.
Out[10]:
lat lon
station
5.698372 3.448503
AGROB Nord Ismaning 0.000000 0.000000
AGROB Süd Ismaning 0.000000 0.000000
Ackermannstraße 0.000000 0.000000
Ahornring Taufkirchen 0.000000 0.000000
Out[10]:
lat lon
station
5.698372 3.448503
In [11]:
data.info()
data.describe()
# We have no NA values, but some values in all four coordinate columns are clearly implausible: some are equal to 0, and some are even negative
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 710106 entries, 0 to 710105
Data columns (total 9 columns):
 #   Column               Non-Null Count   Dtype         
---  ------               --------------   -----         
 0   trip_index           710106 non-null  int64         
 1   STARTTIME            710106 non-null  datetime64[ns]
 2   ENDTIME              710106 non-null  datetime64[ns]
 3   STARTLAT             710106 non-null  float64       
 4   STARTLON             710106 non-null  float64       
 5   ENDLAT               710106 non-null  float64       
 6   ENDLON               710106 non-null  float64       
 7   RENTAL_STATION_NAME  710106 non-null  object        
 8   RETURN_STATION_NAME  710106 non-null  object        
dtypes: datetime64[ns](2), float64(4), int64(1), object(2)
memory usage: 48.8+ MB
Out[11]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON
count 710106.000000 710106 710106 710106.000000 710106.000000 710106.000000 710106.000000
mean 355052.500000 2023-07-03 20:01:02.147932928 2023-07-03 20:28:35.743255296 47.610000 11.772959 47.589505 11.720168
min 0.000000 2023-01-01 00:26:00 2023-01-01 00:42:00 0.000000 -71.178000 -55.973800 -99.259350
25% 177526.250000 2023-05-06 16:43:00 2023-05-06 17:17:15 48.128250 11.549040 48.128300 11.549040
50% 355052.500000 2023-07-04 19:35:00 2023-07-04 19:54:00 48.143190 11.567820 48.143190 11.567710
75% 532578.750000 2023-09-07 20:35:00 2023-09-07 21:04:00 48.159040 11.584500 48.159020 11.584430
max 710105.000000 2023-12-31 23:54:00 2024-01-01 16:00:00 53.094660 141.353220 53.094660 141.353220
std 204990.089464 NaN NaN 5.045196 3.141217 5.170640 3.034583
In [12]:
# We see that between the 1.5% and 99% quantiles the coordinates are roughly plausible, at least up to their integer parts.

data[['STARTLAT',	'STARTLON',	'ENDLAT', 'ENDLON']].quantile(0.015)
data[['STARTLAT',	'STARTLON',	'ENDLAT', 'ENDLON']].quantile(0.99)
Out[12]:
STARTLAT    48.03886
STARTLON    11.46081
ENDLAT      48.03864
ENDLON      11.45315
Name: 0.015, dtype: float64
Out[12]:
STARTLAT    48.24959
STARTLON    11.73105
ENDLAT      48.24959
ENDLON      11.72308
Name: 0.99, dtype: float64

After visual inspection of the data points on a map, the geographic boundaries of the area to be analysed were set as follows:

  • Latitude range: 47.9°N to 48.32°N
  • Longitude range: 11.2°E to 12.0°E

This rectangular area encompasses the entire city of Munich and its immediate neighbouring towns.

In [13]:
ranges = {'STARTLAT': (47.9, 48.32), 'ENDLAT': (47.9, 48.32), 'STARTLON': (11.2, 12), 'ENDLON': (11.2, 12)}

query_str = ' & '.join([f'{col} >= {low} & {col} <= {high}' for col, (low, high) in ranges.items()])
data_filtered = data.query(query_str).copy()
data_filtered
data_filtered[['STARTLAT',	'STARTLON',	'ENDLAT',	'ENDLON']].apply([min, max])
Out[13]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 Barbarastr
1 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445
2 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649
3 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648
4 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007
... ... ... ... ... ... ... ... ... ...
710101 710101 2023-12-31 23:52:00 2023-12-31 23:57:00 48.16719 11.55854 48.16917 11.55547
710102 710102 2023-12-31 23:52:00 2024-01-01 00:07:00 48.17061 11.57391 48.17334 11.55952
710103 710103 2023-12-31 23:53:00 2023-12-31 23:57:00 48.14131 11.56144 48.14094 11.56044
710104 710104 2023-12-31 23:54:00 2023-12-31 23:59:00 48.12353 11.54494 48.12674 11.54758
710105 710105 2023-12-31 23:54:00 2024-01-01 00:06:00 48.12376 11.54871 48.12376 11.54871 Kreisverwaltungsreferat Kreisverwaltungsreferat

694657 rows × 9 columns

Out[13]:
STARTLAT STARTLON ENDLAT ENDLON
min 47.91573 11.20252 47.91573 11.20252
max 48.31310 11.95886 48.31479 11.95886

Previously we checked that all stations have consistent coordinates. Let us also check the converse: whether a station name is present for every known station location. In some cases the name may be missing even though the coordinates correspond to a known station.

In [14]:
# First, get coordinates for all stations, n_trips will denote the number of trips that started at that station
station_coordinates = data_filtered.query('RENTAL_STATION_NAME != ""')[['RENTAL_STATION_NAME', 'STARTLAT', 'STARTLON']].groupby('RENTAL_STATION_NAME')\
                          .agg(latitude=pd.NamedAgg(column="STARTLAT", aggfunc="first"),
                              longitude=pd.NamedAgg(column="STARTLON", aggfunc="first"),
                              n_trips=pd.NamedAgg(column="STARTLAT", aggfunc="count"))
station_coordinates  = station_coordinates.reset_index().rename(columns={'RENTAL_STATION_NAME': 'STATION_NAME'})
station_coordinates
Out[14]:
STATION_NAME latitude longitude n_trips
0 AGROB Nord Ismaning 48.21102 11.66092 73
1 AGROB Süd Ismaning 48.20836 11.65885 151
2 Ackermannstraße 48.16824 11.56469 828
3 Ahornring Taufkirchen 48.04797 11.59898 74
4 Ainmillerstraße 48.15904 11.57756 707
... ... ... ... ...
326 Würmtalstraße Gräfelfing 48.11545 11.44003 163
327 ZHS Olympiazentrum 48.18079 11.54656 663
328 Zenettiplatz 48.12452 11.55557 1176
329 Zugspitzstraße Neuried 48.08800 11.46974 110
330 astopark 48.09174 11.28297 339

331 rows × 4 columns

In [15]:
# Do two joins on coordinates and copy the station name to RENTAL_STATION_NAME and RETURN_STATION_NAME.
# Now the new RENTAL_STATION_NAME and RETURN_STATION_NAME columns will contain some NAs (previously the missing values were empty strings).
data_filtered = data_filtered.merge(
    station_coordinates[['STATION_NAME', 'latitude', 'longitude']],
    left_on=['STARTLAT', 'STARTLON'], right_on=['latitude', 'longitude'],
    how='left'
)
data_filtered['RENTAL_STATION_NAME'] = data_filtered['STATION_NAME']
data_filtered.drop(columns=['STATION_NAME', 'latitude', 'longitude'], inplace=True)

data_filtered = data_filtered.merge(
    station_coordinates[['STATION_NAME', 'latitude', 'longitude']],
    left_on=['ENDLAT', 'ENDLON'], right_on=['latitude', 'longitude'],
    how='left'
)
data_filtered['RETURN_STATION_NAME'] = data_filtered['STATION_NAME']
data_filtered.drop(columns=['STATION_NAME', 'latitude', 'longitude'], inplace=True)

data_filtered
Out[15]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr
1 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445 NaN NaN
2 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649 NaN NaN
3 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648 NaN NaN
4 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007 NaN NaN
... ... ... ... ... ... ... ... ... ...
694652 710101 2023-12-31 23:52:00 2023-12-31 23:57:00 48.16719 11.55854 48.16917 11.55547 NaN NaN
694653 710102 2023-12-31 23:52:00 2024-01-01 00:07:00 48.17061 11.57391 48.17334 11.55952 NaN NaN
694654 710103 2023-12-31 23:53:00 2023-12-31 23:57:00 48.14131 11.56144 48.14094 11.56044 NaN NaN
694655 710104 2023-12-31 23:54:00 2023-12-31 23:59:00 48.12353 11.54494 48.12674 11.54758 NaN NaN
694656 710105 2023-12-31 23:54:00 2024-01-01 00:06:00 48.12376 11.54871 48.12376 11.54871 Kreisverwaltungsreferat Kreisverwaltungsreferat

694657 rows × 9 columns

In [16]:
# Resulting value counts for RENTAL_STATION_NAME and RETURN_STATION_NAME: as will be discussed later, most trips start and end outside stations
data_filtered['RENTAL_STATION_NAME'].value_counts(dropna=False)
data_filtered['RETURN_STATION_NAME'].value_counts(dropna=False)
Out[16]:
RENTAL_STATION_NAME
NaN                         538323
Sandstraße                    3476
TUM Arcisstraße               2623
Hauptbahnhof Nord             2601
Münchner Freiheit             2402
                             ...  
Tölzer Straße Otterloh          17
Domagkstraße Kirchheim          12
Am Sportpark Feldkirchen        11
Waldsiedlung Faistenhaar         9
Bogenstraße Waldbrunn            4
Name: count, Length: 332, dtype: int64
Out[16]:
RETURN_STATION_NAME
NaN                         571970
Sandstraße                    2966
Hauptbahnhof Nord             2456
TUM Arcisstraße               2134
Olympiazentrum                2091
                             ...  
Am Sportpark Feldkirchen        13
Ferd.-Kobell-Straße Haar        12
Parkplatz Grünwald              11
Waldsiedlung Faistenhaar         7
Bogenstraße Waldbrunn            3
Name: count, Length: 332, dtype: int64

Visualization: Trip Start and End Locations¶

This visualization displays a sample of locations where bike trips began and ended in 2023. Key features of the plot:

  • Marker size represents the frequency of trips at each location
  • Data is aggregated by location before sampling
  • Larger markers typically indicate station locations, where there is a natural concentration of trip starts and ends
  • Note: Due to the sampling process, some high-traffic stations might not appear in this visualization

Use the zoom controls to explore specific areas in detail.

In [17]:
def plot_location_counts(data, latitude_column, longitude_column, marker_size, title, **kwargs):
    #  Aggregate the data by location
    agg_data = data[[latitude_column, longitude_column]].groupby([latitude_column, longitude_column]).size().reset_index(name='count')

    # Normalize the count to scale marker sizes
    max_count = agg_data['count'].max()
    agg_data['size'] = agg_data['count'] / max_count * marker_size  


    #  Create a Folium map
    m = folium.Map(location=[data[latitude_column].mean(), data[longitude_column].mean()], zoom_start=12)

    # Add markers to the map
    for i, row in agg_data.sample(**kwargs).iterrows():
        _ = folium.CircleMarker(
            location=[row[latitude_column], row[longitude_column]],
            radius=row['size'], 
            color='blue',
            fill=True,
            fill_color='blue',
            fill_opacity=0.6,
            popup=f"Count: {row['count']}"
        ).add_to(m);

    display(HTML(title))
    display(m)
In [18]:
%%time
sample_size = 20000
plot_location_counts(data_filtered, 'STARTLAT', 'STARTLON', 50, "<h3>Trip Start Locations (marker size reflects the number of trip starts)</h3>", n=sample_size)
plot_location_counts(data_filtered, 'ENDLAT', 'ENDLON', 50, "<h3>Trip End Locations (marker size reflects the number of trip ends)</h3>", n=sample_size)

Trip Start Locations (marker size reflects the number of trip starts)

Make this Notebook Trusted to load map: File -> Trust Notebook

Trip End Locations (marker size reflects the number of trip ends)

Make this Notebook Trusted to load map: File -> Trust Notebook
CPU times: user 46.1 s, sys: 1.23 s, total: 47.3 s
Wall time: 48.8 s

Visualization: Trip Start and End Density Distribution¶

Let's take another perspective and show how many trips started and ended in 2023 in each polygon (in our case, rectangle). The size of each rectangle (also referred to as a tile or cell below) is 0.01 degrees of latitude × 0.01 degrees of longitude. In metres, the width is not equal to the height because of the shape of the Earth and, more importantly, because the actual length of 1 degree of longitude decreases as you move away from the equator.
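As a rough back-of-the-envelope check (not part of the original analysis), the metric dimensions of a 0.01° × 0.01° cell at Munich's latitude can be estimated with a simple spherical-Earth approximation:

```python
import math

R = 6_371_000  # mean Earth radius in metres (spherical approximation)
lat = 48.14    # approximate latitude of central Munich
deg = math.radians(0.01)

# 0.01 degrees of latitude: arc length along a meridian
cell_height_m = R * deg
# 0.01 degrees of longitude: arc length along a parallel shrinks with cos(latitude)
cell_width_m = R * deg * math.cos(math.radians(lat))

print(round(cell_height_m), round(cell_width_m))  # roughly 1112 m tall, 742 m wide
```

So each cell is a rectangle of roughly 1.1 km × 0.74 km, confirming that the tiles are noticeably wider in latitude than in longitude at this distance from the equator.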

In [19]:
%%time


# Convert data to a GeoDataFrame
starts_geometry = [Point(xy) for xy in zip(data_filtered['STARTLON'], data_filtered['STARTLAT'])]
finishes_geometry = [Point(xy) for xy in zip(data_filtered['ENDLON'], data_filtered['ENDLAT'])]
starts_geo_df = gpd.GeoDataFrame(data_filtered[['STARTLON', 'STARTLAT']], geometry=starts_geometry, crs='EPSG:4326')
finishes_geo_df = gpd.GeoDataFrame(data_filtered[['ENDLON', 'ENDLAT']], geometry=finishes_geometry, crs='EPSG:4326')

# Create the spatial grid (square grid in this case)
s_bounds = starts_geo_df.total_bounds
e_bounds = finishes_geo_df.total_bounds

xmin = min(s_bounds[0], e_bounds[0])
ymin = min(s_bounds[1], e_bounds[1])
xmax = max(s_bounds[2], e_bounds[2])
ymax = max(s_bounds[3], e_bounds[3])

xmin, ymin, xmax, ymax 
grid_size = 0.01 

# Create grid squares
x_coords = list(range(int(xmin/grid_size), int(xmax/grid_size) + 1))
y_coords = list(range(int(ymin/grid_size), int(ymax/grid_size) + 1))

polygons = [Polygon([(x*grid_size, y*grid_size), ((x+1)*grid_size, y*grid_size),
                     ((x+1)*grid_size, (y+1)*grid_size), (x*grid_size, (y+1)*grid_size)])
            for x in x_coords for y in y_coords]

grid = gpd.GeoDataFrame({'geometry': polygons}, crs='EPSG:4326')



# Spatial join to aggregate points into the grid
joined_starts = gpd.sjoin(starts_geo_df, grid, how='left', predicate='within')
joined_finishes = gpd.sjoin(finishes_geo_df, grid, how='left', predicate='within')
CPU times: user 17.7 s, sys: 276 ms, total: 18 s
Wall time: 18.9 s
In [20]:
grid['start_and_finish_count'] = pd.concat([joined_starts.rename(columns={'STARTLON': 'LON', 'STARTLAT': 'LAT'}),
                                joined_finishes.rename(columns={'ENDLON': 'LON', 'ENDLAT': 'LAT'})]).groupby('index_right').size()
grid['start_count'] = joined_starts.groupby('index_right').size()
grid['finish_count'] = joined_finishes.groupby('index_right').size()
                    
grid['start_and_finish_count'] = grid['start_and_finish_count'].fillna(0)
grid['start_count'] = grid['start_count'].fillna(0)
grid['finish_count'] = grid['finish_count'].fillna(0)
In [21]:
def plot_grid(grid, joined, trip_count_col, title):

    grid = grid.copy()

    # Plot the results on a Folium map
    m = folium.Map(location=[joined['geometry'].y.mean(), joined['geometry'].x.mean()], zoom_start=12)
    
    # Normalize the trip count for coloring
    max_count = grid[trip_count_col].max()
    grid['color'] = grid[trip_count_col] / max_count * 100  # Scale color intensity by trip count

    # Add bins to the map
    for cell_index, row in grid.iterrows():
        if row[trip_count_col] > 0:  # Only plot bins with at least one trip start
            folium.GeoJson(row['geometry'].__geo_interface__,
                           style_function=lambda x, count=row[trip_count_col]: {
                            'fillColor': f'#{int(count * 255 / max_count):02x}{int(count * 255 / max_count):02x}{int(count * 255 / max_count):02x}',
                               'color': 'black',
                               'weight': 1,
                               'fillOpacity': 0.6
                       },
                           tooltip=f'Polygon {cell_index},\nTrip Count: {int(row[trip_count_col])}').add_to(m)

    display(HTML(title))
    display(m)
In [22]:
plot_grid(grid, joined_starts, 'start_count', "<h3>Trip Start Density (lighter colour reflects higher density)</h3>")
plot_grid(grid, joined_finishes, 'finish_count', "<h3>Trip End Density (lighter colour reflects higher density)</h3>")

Trip Start Density (lighter colour reflects higher density)

Make this Notebook Trusted to load map: File -> Trust Notebook

Trip End Density (lighter colour reflects higher density)

Make this Notebook Trusted to load map: File -> Trust Notebook

The visualizations reveal a clear concentration of bike trips in the city centre, with a significant drop-off in activity as we move outwards. This is why most areas outside the central 7×7 tiles appear similar in colour: their trip counts are comparatively low. The colour gradient scales linearly with the number of trips, from black (fewest) to white (most).
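The colour mapping used in the style function of the plotting code above can be isolated into a small helper: a count is scaled linearly to 0–255 and the same value is repeated for the R, G and B channels, yielding a grey shade between black (0 trips) and white (the maximum count).

```python
def count_to_grey(count, max_count):
    """Linearly map a trip count to a greyscale hex colour string."""
    v = int(count * 255 / max_count)  # 0..255, same value for all three channels
    return f'#{v:02x}{v:02x}{v:02x}'

print(count_to_grey(0, 1000))     # '#000000' (black)
print(count_to_grey(500, 1000))   # '#7f7f7f' (mid grey)
print(count_to_grey(1000, 1000))  # '#ffffff' (white)
```
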

In [23]:
joined_starts.shape
joined_finishes.shape
Out[23]:
(694657, 4)
Out[23]:
(694657, 4)

A few trips started or ended precisely on the grid lines. Because of the way the GeoDataFrames of starts and ends are joined with the grid dataframe (the "within" predicate), these points were not counted in the trip start and end figures for any polygon. The number of such trips is, however, negligible:
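This behaviour can be illustrated on a minimal shapely example: with the "within" predicate, a point lying exactly on a cell boundary matches no cell (within requires the point to be in the polygon's interior), whereas "intersects" would match both adjacent cells and therefore double-count such points.

```python
from shapely.geometry import Point, Polygon

# two adjacent unit cells sharing the edge x = 1
left = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
right = Polygon([(1, 0), (2, 0), (2, 1), (1, 1)])
p = Point(1, 0.5)  # lies exactly on the shared edge

print(p.within(left), p.within(right))          # False False -> counted in neither cell
print(p.intersects(left), p.intersects(right))  # True True  -> would be counted twice
```

Dropping these boundary points via "within" is the conservative choice here, since "intersects" would inflate the counts of both neighbouring tiles.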

In [24]:
# The whole number of trips: 
data_filtered.shape[0], joined_starts.shape[0], joined_finishes.shape[0]

# The number of trips that started on the grid lines: they have no index_right, which corresponds to the index in the grid dataframe
(f"started on the grid: {joined_starts['index_right'].isna().sum()}")

# The number of trips that ended on the grid lines: they have no index_right, which corresponds to the index in the grid dataframe
(f"ended on the grid: {joined_finishes['index_right'].isna().sum()}")

#  The total number of trip starts and ends in the grid table where we now have trip_count for each polygon
(f'total starts plotted: {grid["start_count"].sum()}')
(f'total ends plotted: {grid["finish_count"].sum()}')

# The difference between the total trip count in data_filtered and in the grid dataframe corresponds exactly to the number of trips that started or ended on the grid lines
data_filtered.shape[0] - grid['start_count'].sum()
data_filtered.shape[0] - grid['finish_count'].sum()
Out[24]:
(694657, 694657, 694657)
Out[24]:
'started on the grid: 888'
Out[24]:
'ended on the grid: 952'
Out[24]:
'total starts plotted: 693769.0'
Out[24]:
'total ends plotted: 693705.0'
Out[24]:
np.float64(888.0)
Out[24]:
np.float64(952.0)

Visualization: Major Trip Flows (1000+ trips)¶

Let us plot the most popular trip "destinations" in 2023. In order to do this, we first need a crosstab with rows and columns corresponding to the rectangle tiles defined above.

In [25]:
joined_starts_finishes = joined_starts.merge(joined_finishes, left_index=True, right_index=True, suffixes=('_s', '_f'))
joined_starts_finishes
Out[25]:
STARTLON STARTLAT geometry_s index_right_s ENDLON ENDLAT geometry_f index_right_f
0 11.54569 48.13795 POINT (11.54569 48.13795) 1416.0 11.55782 48.16123 POINT (11.55782 48.16123) 1460.0
1 11.54431 48.12903 POINT (11.54431 48.12903) 1415.0 11.53445 48.14797 POINT (11.53445 48.14797) 1376.0
2 11.55566 48.16841 POINT (11.55566 48.16841) 1460.0 11.57649 48.16467 POINT (11.57649 48.16467) 1542.0
3 11.55567 48.16843 POINT (11.55567 48.16843) 1460.0 11.57648 48.16464 POINT (11.57648 48.16464) 1542.0
4 11.54878 48.17104 POINT (11.54878 48.17104) 1420.0 11.53007 48.16243 POINT (11.53007 48.16243) 1378.0
... ... ... ... ... ... ... ... ...
694652 11.55854 48.16719 POINT (11.55854 48.16719) 1460.0 11.55547 48.16917 POINT (11.55547 48.16917) 1460.0
694653 11.57391 48.17061 POINT (11.57391 48.17061) 1543.0 11.55952 48.17334 POINT (11.55952 48.17334) 1461.0
694654 11.56144 48.14131 POINT (11.56144 48.14131) 1499.0 11.56044 48.14094 POINT (11.56044 48.14094) 1499.0
694655 11.54494 48.12353 POINT (11.54494 48.12353) 1415.0 11.54758 48.12674 POINT (11.54758 48.12674) 1415.0
694656 11.54871 48.12376 POINT (11.54871 48.12376) 1415.0 11.54871 48.12376 POINT (11.54871 48.12376) 1415.0

694657 rows × 8 columns

In [26]:
starts_finishes_crosstab = pd.crosstab(
            joined_starts_finishes['index_right_s'], 
            joined_starts_finishes['index_right_f'],
            margins=False
)
starts_finishes_crosstab
Out[26]:
index_right_s × index_right_f crosstab (very wide output truncated: rows are start-tile indices, columns are end-tile indices, and each cell contains the number of trips from the start tile to the end tile)
221.0 0 0 1 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
263.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
304.0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
328.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... 
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
2868.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
2892.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2909.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
2950.0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
3089.0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

692 rows × 733 columns

In [ ]:
 
In [27]:
def jitter_line(src: Point, dst: Point, magnitude: float = 0.0002, reverse=False):
    """
    Offsets a line between src and dst by a small perpendicular jitter.
    With reverse=True the offset goes to the opposite side, so the two
    directions of the same flow do not overlap.
    """
    dx = dst.x - src.x
    dy = dst.y - src.y
    length = math.hypot(dx, dy)
    if length == 0:
        return [(src.y, src.x), (dst.y, dst.x)]  # avoid division by zero

    # Unit vector perpendicular to the line
    ux = -dy / length
    uy = dx / length
    if reverse:
        ux, uy = -ux, -uy

    jitter_x = ux * (magnitude + np.random.normal(loc=0, scale=magnitude / 2))
    jitter_y = uy * (magnitude + np.random.normal(loc=0, scale=magnitude / 2))

    jittered_src = (src.y + jitter_y, src.x + jitter_x)  # (lat, lon)
    jittered_dst = (dst.y + jitter_y, dst.x + jitter_x)

    return [jittered_src, jittered_dst]
In [28]:
cmap = cm.LinearColormap(['lightblue','darkblue'], vmin=0, vmax=1)

# compute centroids
centroids = grid.geometry.centroid

# centre map
mean_x, mean_y = centroids.x.mean(), centroids.y.mean()
m = folium.Map(location=[mean_y, mean_x], zoom_start=12);

# color scale
min_c = starts_finishes_crosstab.values[starts_finishes_crosstab.values > 0].min()
max_c = starts_finishes_crosstab.values.max()

for _, row in grid.iterrows():
    if row['start_and_finish_count'] > 0:  # only plot cells with at least one trip start or finish
        _ = folium.GeoJson(row['geometry'].__geo_interface__,
                           style_function=lambda x : {
                            'fillColor': '#d3d3d3',
                               'color': 'black',
                               'weight': 1,
                               'fillOpacity': 0.6
                       },
                           ).add_to(m)


# draw lines & arrows
for src, row in starts_finishes_crosstab.iterrows():
    for dst, cnt in row.items():
        if cnt > 1000:
            src_pt, dst_pt = centroids.loc[src], centroids.loc[dst]
            coords = jitter_line(src_pt, dst_pt, magnitude=0.002, reverse=src > dst)
            weight =  (cnt - min_c) / (max_c - min_c) * 10
            color = 'blue'

            line = folium.PolyLine(locations=coords, weight=weight,
                                  color=color, opacity=0.9,
                                  tooltip=f'Trips: {cnt}').add_to(m);

            _ = PolyLineTextPath(line, '➤', repeat=False, offset=6,
                                     attributes={'fill': color,
                                                 'font-weight': 'bold',
                                                 'font-size': '18'}).add_to(m);


display(HTML("<h3>Major Trip Flows (marker size and line thickness reflect the number of trips)</h3>"))
display(m)
/var/folders/0j/hc4b4c153dzfbvkzcxfbtjv80000gn/T/ipykernel_1324/3679063521.py:4: UserWarning: Geometry is in a geographic CRS. Results from 'centroid' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.
  centroids = grid.geometry.centroid

Major Trip Flows (marker size and line thickness reflect the number of trips)

  • Flow representation:

    • Arrows: Connect different grid cells
    • Dots: Represent flows within the same cell
    • Size/Thickness: Proportional to trip volume
    • Jittering: Applied to prevent line overlap
  • Key Observations

    • Highest concentration of flows in the city centre (administrative, residential and business districts)
    • Notable exception: Olympiapark area shows some significant activity
    • Most major flows occur between adjacent cells or within single cells
    • Natural barriers (e.g., railway tracks) effectively limit cross-flow

Trip Duration Distribution¶

Let's examine the distribution of trip durations to understand typical journey lengths (in minutes). We'll visualize this using:

  • A boxplot to show the statistical summary (median, quartiles, and outliers)
  • A histogram to display the frequency distribution of trip durations

This analysis will help us identify common trip lengths and any unusual patterns in journey times.

In [29]:
data_filtered['trip_duration_minutes'] = (data_filtered['ENDTIME'] - data_filtered['STARTTIME']).dt.total_seconds() / 60
In [30]:
fig, axes = plt.subplots(1, 2, figsize=(12, 6))

sns.boxplot(x=data_filtered['trip_duration_minutes'], ax=axes[0])
axes[0].set_title('All trip durations')

sns.boxplot(x=data_filtered['trip_duration_minutes'], ax=axes[1])
axes[1].set_xlim(-100, 100)
axes[1].set_title('Trip durations (zoomed to ±100 minutes)')

plt.tight_layout();
No description has been provided for this image

With the default boxplot settings, outliers (the empty circles) are points lying more than 1.5 times the interquartile range (IQR) beyond the quartiles. Here the quartiles are 7 and 19 minutes, so the IQR is 12 and the fences sit at 7 - 18 = -11 and 19 + 18 = 37: all trip durations above 37 minutes or below -11 minutes are flagged as outliers.
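The fences can be recomputed directly from the quartiles reported by `describe()` below; a minimal sketch of Tukey's rule:

```python
# Tukey's rule: points beyond 1.5 * IQR from the quartiles are outliers.
# Quartile values taken from the describe() summary of trip_duration_minutes.
q1, q3 = 7.0, 19.0
iqr = q3 - q1                   # 12.0
lower_fence = q1 - 1.5 * iqr    # -11.0
upper_fence = q3 + 1.5 * iqr    # 37.0
```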

In [32]:
data_filtered[['trip_duration_minutes']].describe()
Out[32]:
trip_duration_minutes
count 694657.000000
mean 26.332067
std 133.764000
min -56.000000
25% 7.000000
50% 11.000000
75% 19.000000
max 14557.000000
In [33]:
data_filtered['trip_duration_minutes'].quantile(0.95)
Out[33]:
np.float64(51.0)

We notice a few negative trip durations, so let's take a closer look at them. These cases occur when a trip spans the switch from daylight saving time back to standard time: the recorded duration is then one hour shorter than the actual one. Since there are only a few such trips, we keep them for now. We would also expect mirror-image cases on the day daylight saving time starts, where the recorded duration appears one hour longer than it really was.
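Assuming all timestamps are naive local (Europe/Berlin) times, the true duration of these trips can be recovered by adding back the skipped hour; a sketch using the first row of the output below:

```python
from datetime import datetime

# Trip recorded on the night the clocks went back
# (2023-10-29: 03:00 local time jumped back to 02:00 in Germany)
start = datetime(2023, 10, 29, 2, 37)
end = datetime(2023, 10, 29, 2, 16)

naive_minutes = (end - start).total_seconds() / 60  # -21.0 as stored
# The wall clock was set back one hour between the two timestamps,
# so the true duration is one hour longer than the naive difference
true_minutes = naive_minutes + 60                   # 39.0
```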

In [34]:
data_filtered[data_filtered['trip_duration_minutes'] < 0]
data_filtered[data_filtered['trip_duration_minutes'] < 0].shape
Out[34]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes
629483 643458 2023-10-29 02:37:00 2023-10-29 02:16:00 48.12044 11.57651 48.17835 11.57484 Kolumbusplatz NaN -21.0
629490 643465 2023-10-29 02:41:00 2023-10-29 02:00:00 48.13998 11.50744 48.11326 11.53177 NaN NaN -41.0
629503 643478 2023-10-29 02:49:00 2023-10-29 02:04:00 48.14395 11.55688 48.17062 11.53471 NaN NaN -45.0
629504 643479 2023-10-29 02:50:00 2023-10-29 02:18:00 48.12452 11.55557 48.13293 11.50309 Zenettiplatz NaN -32.0
629507 643482 2023-10-29 02:52:00 2023-10-29 02:10:00 48.13000 11.57317 48.13107 11.60504 NaN NaN -42.0
629508 643483 2023-10-29 02:52:00 2023-10-29 02:07:00 48.16184 11.58603 48.15727 11.56178 NaN NaN -45.0
629510 643485 2023-10-29 02:54:00 2023-10-29 02:32:00 48.18261 11.55426 48.16695 11.62009 NaN NaN -22.0
629514 643489 2023-10-29 02:55:00 2023-10-29 02:25:00 48.15124 11.56574 48.13593 11.63376 NaN Berg am Laim -30.0
629515 643490 2023-10-29 02:55:00 2023-10-29 02:31:00 48.18258 11.55436 48.16693 11.62008 NaN NaN -24.0
629519 643494 2023-10-29 02:56:00 2023-10-29 02:06:00 48.13114 11.58220 48.13114 11.58220 Boschbrücke Boschbrücke -50.0
629520 643495 2023-10-29 02:57:00 2023-10-29 02:01:00 48.14385 11.58622 48.14355 11.58616 NaN NaN -56.0
629521 643496 2023-10-29 02:57:00 2023-10-29 02:06:00 48.15427 11.57630 48.14883 11.57755 NaN NaN -51.0
629522 643497 2023-10-29 02:58:00 2023-10-29 02:11:00 48.12452 11.55557 48.11521 11.57917 Zenettiplatz NaN -47.0
629524 643499 2023-10-29 02:59:00 2023-10-29 02:27:00 48.17161 11.50712 48.19163 11.55271 NaN NaN -32.0
Out[34]:
(14, 10)

For the histogram, let's only consider trip durations below 500 minutes so that the shape of the distribution can be recognised better.

In [35]:
sns.histplot(data_filtered.query('trip_duration_minutes < 500')['trip_duration_minutes']);
plt.title("Distribution of Trip Durations Below 500 Minutes");
No description has been provided for this image

The distribution appears to belong to the exponential family, with shorter trip durations being more common and very long durations becoming increasingly rare. Common sense suggests that trips lasting more than a day (for example, over 1000 minutes) do not align with the intended use case of bike sharing. These trips are clear outliers based on the default boxplot settings (see above). Extremely long durations likely result from technical issues, misuse, vandalism, or a combination of these factors. Trips longer than 1000 minutes account for less than 0.5% of all trips. Overall, however, the distribution of trip durations looks plausible.
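As a quick sanity check of the exponential-family impression, the empirical median can be compared with the median implied by a maximum-likelihood exponential fit (whose scale parameter is simply the sample mean); a sketch on synthetic data standing in for `trip_duration_minutes`:

```python
import numpy as np

# Synthetic stand-in for the duration column (minutes), exponential-ish
rng = np.random.default_rng(42)
durations = rng.exponential(scale=15, size=10_000)

# The MLE scale of an exponential is the sample mean;
# the implied median of Exp(scale) is scale * ln(2)
scale_hat = durations.mean()
implied_median = scale_hat * np.log(2)
empirical_median = np.median(durations)
# For genuinely exponential data the two medians agree closely
```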

In [36]:
data_filtered.query('trip_duration_minutes > 1000').shape[0] / data_filtered.shape[0]
Out[36]:
0.0034304700017418668

Trip Distance Distribution¶

Another important aspect to consider is trip distance. While we do not have data on the exact trajectories of each trip, we do have the coordinates of the starting and ending points. For simplicity, we will refer to the straight-line distance between these points as the “trip distance.” As will be shown below, this measure can differ substantially from the actual path taken.
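As an aside: `DataFrame.apply` with `geodesic` (used below) is exact but slow on ~700k rows; a vectorised haversine over the coordinate columns is a close and much faster approximation. A sketch, with illustrative coordinates roughly at Marienplatz and Olympiapark:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; agrees with geodesic results to well
    under 1% at city scale, and is fully vectorised over arrays."""
    r = 6371.0  # mean Earth radius in km
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

# Roughly Marienplatz -> Olympiapark (illustrative coordinates)
d = haversine_km(48.1374, 11.5755, 48.1755, 11.5518)
```

Passing the `STARTLAT`/`STARTLON`/`ENDLAT`/`ENDLON` columns as NumPy arrays would compute all distances in one call instead of row by row.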

In [37]:
data_filtered['trip_distance_km'] = data_filtered.apply(lambda row: geodesic((row['STARTLAT'], row['STARTLON']),
                                                           (row['ENDLAT'], row['ENDLON'])).km, axis=1);
In [38]:
data_filtered[['trip_distance_km']].describe()
Out[38]:
trip_distance_km
count 694657.000000
mean 1.851332
std 1.644747
min 0.000000
25% 0.763568
50% 1.458382
75% 2.501869
max 50.218839
In [39]:
sns.boxplot(data_filtered['trip_distance_km']);
plt.title('Trip Distances Boxplot');
No description has been provided for this image

According to the default boxplot settings, trips with a distance over 5.1 km are considered outliers.

In [40]:
sns.histplot(data_filtered['trip_distance_km']);
plt.title('Trip Distances Histogram');
No description has been provided for this image

In many respects, the distribution of trip distance resembles that of trip duration: it has a similar shape and likely belongs to the exponential distribution family as well. One notable feature, however, is a sharp peak near zero. Specifically, around 16% of all trips cover less than 500 metres, and approximately 3% show a trip distance of exactly zero. This does not necessarily indicate a problem with the data. It is plausible that some users had to terminate their trip shortly after starting due to technical issues. However, if that were the case, we would expect to see a corresponding spike in the distribution of trip durations — which we do not. One could still argue that these trips might last a few minutes while the user attempts to resolve the issue, resulting in a small but nonzero duration. After all, users cannot control how trip time is recorded, but they can influence the measured distance by not moving the bike.

That said, a more likely explanation for this peak is that bike sharing is not always used for one-way travel from point A to point B. In some cases, users may ride from A to B and then return to A, making it a round trip. This behaviour would not be captured by our definition of trip distance, which only accounts for the straight-line distance between the start and end points.

In [41]:
data_filtered.query('trip_distance_km < .5').shape[0] / data_filtered.shape[0] 
data_filtered.query('trip_distance_km < 0.01').shape[0] / data_filtered.shape[0] 
data_filtered.query('trip_distance_km == 0').shape[0] / data_filtered.shape[0] 
Out[41]:
0.15562356673869263
Out[41]:
0.04736006403160121
Out[41]:
0.03454078775568374

The correlation between the trip distance and the trip duration is also extremely low, which can be an indication that our measure of trip distance is not reliable.

In [42]:
data_filtered[['trip_distance_km', 'trip_duration_minutes']].corr()
Out[42]:
trip_distance_km trip_duration_minutes
trip_distance_km 1.000000 0.021396
trip_duration_minutes 0.021396 1.000000
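The Pearson coefficient above is dominated by the extreme outliers present in both columns, so a rank-based Spearman correlation is a useful robustness check; a sketch on synthetic data (not the actual trip table):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
duration = rng.exponential(scale=15, size=5_000)
distance = 0.1 * duration + rng.exponential(scale=1.5, size=5_000)
distance[:10] = 500  # a handful of extreme outliers

df = pd.DataFrame({'trip_duration_minutes': duration,
                   'trip_distance_km': distance})

# Pearson collapses under the outliers; Spearman, computed on ranks,
# still recovers the underlying monotone relationship
pearson = df.corr(method='pearson').iloc[0, 1]
spearman = df.corr(method='spearman').iloc[0, 1]
```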

Trip Starts Time series analysis¶

Let's now visualise trip start times from a temporal perspective. We begin by plotting the total number of trip starts, aggregated by month, day, and hour.

In [43]:
monthly_aggregated_starts = data_filtered.resample('1MS', on='STARTTIME')['STARTTIME'].count()
monthly_aggregated_starts.plot(figsize=(15,6), title="Trip Starts per Month");
No description has been provided for this image
In [44]:
daily_aggregated_starts = data_filtered.resample('1D', on='STARTTIME')['STARTTIME'].count()
daily_aggregated_starts.plot(figsize=(15,6),  title="Trip Starts per Day");
No description has been provided for this image
In [45]:
hourly_aggregated_starts = data_filtered.resample('1h', on='STARTTIME')['STARTTIME'].count()
hourly_aggregated_starts.plot(figsize=(15,6),  title="Trip Starts per Hour");
No description has been provided for this image

Since the hourly plot is quite dense, we will zoom in on a shorter time period — specifically from the end of August to the beginning of September — to better examine the details.

In [46]:
hourly_aggregated_starts['2023-08-19' : '2023-09-02'].plot(figsize=(15,6),  title="Trip Starts per Hour (Zoomed In)");
No description has been provided for this image

These visualisations highlight several characteristics of the data:

  • Intra-year seasonality: As expected, more trips are made during the warmer months compared to the colder ones. While comparing raw monthly totals is not entirely accurate (they should ideally be normalised by the number of days in each month), the seasonal pattern is clearly visible and aligns with common sense.
  • Intra-week seasonality: There seems to be some fluctuation in daily trip counts that appears to follow a weekly rhythm, superimposed on the general yearly trend.
  • Outliers: A few days stand out with unusually high trip counts. Notably, these correspond to public transport strikes on 2–3 March 2023 and 19 May 2023.
  • Intraday patterns: The hourly chart largely mirrors the shape of the daily chart, but details are harder to distinguish at this scale. When zooming in on a narrower window (e.g. 19 August to 2 September), we observe clear intraday seasonality — with fewer trips at night and more during the day. On most days, there is one distinct global peak in activity, often accompanied by several smaller local peaks.
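As noted in the first bullet, raw monthly totals should ideally be normalised by the number of days in each month. A minimal sketch of that normalisation, assuming a monthly series indexed by month-start timestamps like `monthly_aggregated_starts` (the counts here are made up for illustration):

```python
import pandas as pd

# Toy monthly totals indexed by month start, as produced by resample('1MS')
monthly_totals = pd.Series(
    [10_000, 9_800, 31_000],
    index=pd.date_range('2023-01-01', periods=3, freq='MS'),
)

# Divide each month's total by its number of days to get a comparable
# average daily trip count per month
avg_trips_per_day = monthly_totals / monthly_totals.index.days_in_month
print(avg_trips_per_day.round(1))
```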

To explore intra-week and intraday seasonality more directly, let us plot the average number of trips for each day of the week and each hour of the day.

In [47]:
average_starts_by_weekday = daily_aggregated_starts.groupby(daily_aggregated_starts.index.weekday).mean()
average_starts_by_weekday.plot(figsize=(15,6), ylim=(0, 2300),  title="Average Starts Count per Weekday");
No description has been provided for this image
In [48]:
average_starts_by_hour_of_the_day = hourly_aggregated_starts.groupby(hourly_aggregated_starts.index.hour).mean()
average_starts_by_hour_of_the_day.plot(figsize=(15,6),  title="Average Starts Count per Hour of the Day");
No description has been provided for this image

The average trip counts for Thursday and Friday (represented by values 3 and 4 on the x-axis) are slightly higher than for other weekdays, which may seem counterintuitive — especially since the overall differences across weekdays are not particularly pronounced in the chart. A plausible explanation is that the public transport strikes mentioned earlier took place on a Thursday and a Friday, potentially skewing the average values for these days.

Intraday seasonality, on the other hand, is more distinct and aligns well with expectations. There is a local peak between 8:00 and 9:00 in the morning, corresponding to the beginning of the working day. A second, even more prominent peak occurs between 18:00 and 19:00, as people finish work. Trip activity generally drops at night, with the lowest values observed between 4:00 and 6:00 in the morning.
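Both seasonalities can also be viewed jointly in a weekday-by-hour matrix, which a heatmap (e.g. with seaborn) would render at a glance. A sketch with synthetic hourly counts standing in for `hourly_aggregated_starts`:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for hourly_aggregated_starts: one Poisson count per hour of 2023
rng = np.random.default_rng(0)
idx = pd.date_range('2023-01-01', '2023-12-31 23:00', freq='h')
hourly_counts = pd.Series(rng.poisson(60, size=len(idx)), index=idx)

# Average count for each (weekday, hour) cell:
# rows are weekdays (0 = Monday), columns are hours 0-23
weekday_hour = (
    hourly_counts
    .groupby([hourly_counts.index.weekday, hourly_counts.index.hour])
    .mean()
    .unstack()
)
print(weekday_hour.shape)  # (7, 24)
```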

Stations¶

Let us now analyse how many trips began at each station in 2023. As the data shows, even the busiest stations individually account for less than 1% of all trips, so no single station plays a significant role in overall traffic.
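The "<1%" claim is easy to quantify from the counts below: Sandstraße's 3,476 starts out of 694,657 trips is roughly 0.5%. A toy sketch of that computation, assuming a column like `RENTAL_STATION_NAME` where `NaN` marks free-floating starts (the numbers here are illustrative):

```python
import pandas as pd

# Toy data: a few station-based starts plus many free-floating ones (None -> NaN)
df = pd.DataFrame({
    'RENTAL_STATION_NAME': ['Sandstraße'] * 3 + ['Universität'] * 2 + [None] * 995
})

# Share of ALL trips (station-based and free-floating) starting at the busiest station;
# value_counts() excludes NaN by default, so we divide by the full trip count
top_station_share = df['RENTAL_STATION_NAME'].value_counts().iloc[0] / len(df)
print(f"busiest station: {top_station_share:.2%} of all trips")
```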

In [49]:
data_filtered['RENTAL_STATION_NAME'].value_counts(dropna=False).head(20)
# NaN corresponds to trips that started outside stations
Out[49]:
RENTAL_STATION_NAME
NaN                               538323
Sandstraße                          3476
TUM Arcisstraße                     2623
Hauptbahnhof Nord                   2601
Münchner Freiheit                   2402
Universität                         2377
Olympiazentrum                      2370
Hackerbrücke                        2157
Königinstraße                       1971
Pasing                              1797
Rotkreuzplatz                       1753
Maillingerstraße                    1745
Technische Universität München      1742
Goetheplatz (Nord)                  1669
Brudermühlstraße                    1635
Türkenstraße                        1606
Hohenzollernplatz                   1586
Josephsplatz                        1505
Klinikum Großhadern                 1497
Nordbad                             1438
Name: count, dtype: int64

To ensure that no other locations recorded unusually high numbers of trip starts, we also counted the number of trips starting from each unique coordinate pair in the STARTLAT and STARTLON columns. The results confirm the earlier findings: the highest counts still correspond to known station locations, and no individual point exceeds the number of trip starts recorded at the busiest station.

In [50]:
data_filtered[['STARTLAT', 'STARTLON', 'STARTTIME']]\
    .groupby(['STARTLAT', 'STARTLON']).count().sort_values('STARTTIME', ascending=False).head(10)
Out[50]:
STARTTIME
STARTLAT STARTLON
48.15128 11.55832 3476
48.14818 11.56833 2623
48.14210 11.56099 2601
48.16199 11.58654 2402
48.14954 11.58065 2377
48.17842 11.55658 2370
48.14281 11.54874 2157
48.14984 11.58450 1971
48.14873 11.46176 1797
48.15291 11.53310 1753

Let us now repeat the same analysis for trip end locations. The conclusion is similar: there are no coordinate pairs with an exceptionally high number of trip ends other than those corresponding to known stations. And as before, even the busiest stations account for only a small fraction of total trips.

In [51]:
data_filtered['RETURN_STATION_NAME'].value_counts(dropna=False).head(20)
data_filtered[['ENDLAT', 'ENDLON', 'ENDTIME']]\
    .groupby(['ENDLAT', 'ENDLON']).count().sort_values('ENDTIME', ascending=False).head(10)
Out[51]:
RETURN_STATION_NAME
NaN                               571970
Sandstraße                          2966
Hauptbahnhof Nord                   2456
TUM Arcisstraße                     2134
Olympiazentrum                      2091
Universität                         1976
Pasing                              1826
Münchner Freiheit                   1783
Hackerbrücke                        1701
Königinstraße                       1639
Rotkreuzplatz                       1518
Maillingerstraße                    1388
Technische Universität München      1382
Brudermühlstraße                    1365
Josephsplatz                        1324
Türkenstraße                        1324
Klinikum Großhadern                 1319
Goetheplatz (Nord)                  1269
Nordbad                             1202
Maibaumplatz Garching               1083
Name: count, dtype: int64
Out[51]:
ENDTIME
ENDLAT ENDLON
48.15128 11.55832 2966
48.14210 11.56099 2456
48.14818 11.56833 2134
48.17842 11.55658 2091
48.14954 11.58065 1976
48.14873 11.46176 1826
48.16199 11.58654 1783
48.14281 11.54874 1701
48.14984 11.58450 1639
48.15291 11.53310 1518

Visualization: Stations as Trip Start and End Locations¶

Next, let us plot trip starts and ends at each station on a map, using the same approach as we did previously for individual trips. As expected, stations with relatively high numbers of trip starts and ends are located in and around the city centre, where station density is also higher, as well as in the central areas of outlying districts and municipalities. However, it is important to keep in mind that the contribution of each individual station remains small relative to the total number of trips recorded in 2023.

In [52]:
plot_location_counts(data_filtered.query('RENTAL_STATION_NAME.notna()'), 'STARTLAT', 'STARTLON', 10, "<h3>Trip Starts at Stations (marker size reflects the number of trip starts)</h3>", frac=1)
plot_location_counts(data_filtered.query('RETURN_STATION_NAME.notna()'), 'ENDLAT', 'ENDLON', 10, "<h3>Trip Ends at Stations (marker size reflects the number of trip ends)</h3>", frac=1)

Trip Starts at Stations (marker size reflects the number of trip starts)


Trip Ends at Stations (marker size reflects the number of trip ends)


Visualization: Trip start and end time series per station¶

Although individual stations do not account for a large share of overall bike movements, it is still interesting to examine how the number of trips starting and ending at a specific station changes over time. The station Hauptbahnhof Nord is relatively busy, but it has a limited capacity of only 10 bikes (with space for perhaps 4–5 additional bikes nearby).

We observe that within any given 20-minute interval, no more than 7 trips start, with 1 being the most common non-zero value (zero is by far the most frequent value). Note that the number of trip starts does not necessarily equal the number of distinct bikes used: in theory, the same bike could be rented, returned, and rented again within the same 20-minute window. The distribution of trip ends closely resembles that of trip starts.

Finally, the plot of the net change in the number of bikes (referred to here as the bike delta) shows that the bike count at this station remains fairly stable, which is consistent with the station's small capacity. The delta also oscillates around zero, suggesting that over the long term roughly as many bikes are rented as returned, a pattern that also seems reasonable.
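The per-interval bike delta (ends minus starts) can also be cumulated into a running estimate of how the bike count at a station drifts over time. A minimal sketch, assuming a series shaped like the delta computed inside `plot_trips_for_stations` below (the values here are made up):

```python
import pandas as pd

# Toy per-interval delta: trip ends minus trip starts in 20-minute bins
delta = pd.Series(
    [0, 1, -1, 2, 0, -2],
    index=pd.date_range('2023-06-01 08:00', periods=6, freq='20min'),
)

# Cumulative sum gives the net change in bike count relative to the first interval;
# oscillation around zero means rentals and returns roughly balance out
running_stock_change = delta.cumsum()
print(running_stock_change.tolist())  # [0, 1, 0, 2, 2, 0]
```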

In [53]:
def plot_trips_for_stations(df, station_name, frequency):
    
    starts_for_station = df.query('RENTAL_STATION_NAME == @station_name').set_index('STARTTIME').resample(frequency).count()['ENDTIME']
    finishes_for_station = df.query('RETURN_STATION_NAME == @station_name').set_index('ENDTIME').resample(frequency).count()['STARTTIME']
    starts_and_finishes = pd.concat([starts_for_station, finishes_for_station], axis=1)
    n_bikes_delta_for_station = starts_and_finishes['ENDTIME'].fillna(0) - starts_and_finishes['STARTTIME'].fillna(0)

    fig = plt.figure();
    plt.title(f'Number of trip starts for {station_name}')
    ax = starts_for_station.plot(figsize=(15, 6));
    print(starts_for_station.value_counts(dropna=False))


    fig = plt.figure();
    plt.title(f'Number of trip ends for {station_name}')
    ax = finishes_for_station.plot(figsize=(15, 6));
    print(finishes_for_station.value_counts(dropna=False))

    fig = plt.figure();
    plt.title(f'Difference between number of starts and number of finishes for {station_name}')
    ax = n_bikes_delta_for_station.plot(figsize=(15, 6));
    print(n_bikes_delta_for_station.value_counts(dropna=False))
In [54]:
plot_trips_for_stations(data_filtered, 'Hauptbahnhof Nord', '20Min')
ENDTIME
0    24030
1     1738
2      281
3       67
4       17
5        5
7        1
Name: count, dtype: int64
STARTTIME
0    24078
1     1825
2      239
3       39
4        9
Name: count, dtype: int64
 0.0    22569
-1.0     1551
 1.0     1541
 2.0      239
-2.0      190
 3.0       45
-3.0       34
 4.0       10
-4.0        8
 5.0        2
 6.0        1
Name: count, dtype: int64
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image

Let us now consider a less busy station with the same capacity as Hauptbahnhof Nord (10 bikes). The overall pattern is very similar to what we observed for Hauptbahnhof Nord; the only difference is that positive values for both trip starts and ends occur even less frequently at this location.

In [55]:
plot_trips_for_stations(data_filtered, 'Messestadt West', '20Min')
ENDTIME
0    25345
1      369
2       48
3       13
4        2
5        1
Name: count, dtype: int64
STARTTIME
0    25314
1      409
2       49
3        3
5        1
4        1
Name: count, dtype: int64
 0.0    24980
-1.0      360
 1.0      348
-2.0       41
 2.0       39
 3.0        6
-3.0        3
 4.0        2
-5.0        1
-4.0        1
Name: count, dtype: int64
No description has been provided for this image
No description has been provided for this image
No description has been provided for this image

Trip history reconstruction¶

One of the key aspects a bikesharing provider may be interested in is idle time — the period during which a bike remains unused, despite being available for rental. There are various ways to quantify idle time, but its relevance for business decision-making is clear: it reflects how efficiently the available assets (i.e. bikes) are being deployed to meet demand. Prolonged idle times may also indicate underlying issues, such as technical malfunctions, especially when explicit problem reports are unavailable or unreliable.

Identifying excessive idle times — for instance, the duration between the end of one trip and the start of the next — would be straightforward if we had access to full trip histories for individual bikes. Unfortunately, the dataset does not include bike IDs or other identifiers that allow us to track bikes directly. As a result, we attempt to reconstruct partial trip histories by applying a set of restrictive assumptions.

The core idea is to assume that one trip is a continuation of another only when there is no ambiguity based on the available data. This conservative approach likely results in a larger number of reconstructed “trip histories” than exist in reality — that is, we may be splitting a real bike’s activity into multiple shorter histories. However, we aim for these reconstructed sequences to represent plausible, consistent fragments of actual usage. In many cases, it is reasonable to assume that a trip is a continuation of the previous one when it starts shortly after the previous trip ends and begins at the same location. However, this logic breaks down when more than one bike is potentially available at that location at the same time. In such cases — for example, when two or more trips end at the same place just before another begins — we cannot determine which bike was used, and so the reconstructed trip history must end at that point.

The task of estimating idle time would be significantly easier if each point in space saw at most one trip end and one trip start. In such cases, we could confidently interpret the time gap between the end of one trip and the beginning of the next at that location as the bike’s idle time. If no follow-up trip occurs at that point, we might infer that the bike was moved (e.g. for rebalancing), taken out of service due to a technical issue, or simply not used again within the time period covered by our data. Again, these inferences would be much more reliable if we could link each trip to a specific bike.

To make this more concrete, let us determine — for each trip — how many bikes are expected to be present at the start location shortly before the trip’s start time, and how many are expected at the end location shortly before the trip’s end time. Ideally, there should be exactly one bike at the start location (which would be the one used for the trip) and zero bikes at the end location (as the bike is expected to arrive there). Any deviation from this — for example, multiple bikes at the start location or existing bikes at the end location — introduces ambiguity into the reconstruction of trip histories.

In [56]:
# let us construct a dataframe where each row is a trip start or a trip end "event".

data_filtered['start_coord_zipped'] = list(zip(data_filtered['STARTLAT'], data_filtered['STARTLON']))
data_filtered['end_coord_zipped'] = list(zip(data_filtered['ENDLAT'], data_filtered['ENDLON']))

start_events = pd.DataFrame({
    'trip_index':  data_filtered['trip_index'],
    'timestamp':       data_filtered['STARTTIME'],
    'location':      data_filtered['start_coord_zipped'],
    'kind':     'start'
})
end_events = pd.DataFrame({
    'trip_index':  data_filtered['trip_index'],
    'timestamp':       data_filtered['ENDTIME'],
    'location':      data_filtered['end_coord_zipped'],
    'kind':     'end'
})
start_and_end_events = pd.concat([start_events, end_events], ignore_index=True)

# sort all events by timestamp and by kind: first ends, then starts
start_and_end_events.sort_values(['timestamp', 'kind'], inplace=True)
In [57]:
start_and_end_events
Out[57]:
trip_index timestamp location kind
0 0 2023-01-01 00:26:00 (48.13795, 11.54569) start
1 1 2023-01-01 00:30:00 (48.12903, 11.54431) start
2 2 2023-01-01 00:32:00 (48.16841, 11.55566) start
3 3 2023-01-01 00:34:00 (48.16843, 11.55567) start
4 4 2023-01-01 00:35:00 (48.17104, 11.54878) start
... ... ... ... ...
1389310 710102 2024-01-01 00:07:00 (48.17334, 11.55952) end
1389304 710096 2024-01-01 00:08:00 (48.17337, 11.55951) end
1388734 709522 2024-01-01 00:09:00 (48.18169, 11.57342) end
1389308 710100 2024-01-01 00:39:00 (48.24959, 11.65319) end
1389124 709916 2024-01-01 16:00:00 (48.12701, 11.55865) end

1389314 rows × 4 columns

In [58]:
start_and_end_events['is_start'] = (start_and_end_events['kind'] == 'start').astype(int)
start_and_end_events['is_end']   = (start_and_end_events['kind'] == 'end').astype(int)

# compute the number of starts and ends seen so far (up to but NOT including this row) within each location group
# Note: if several trips start or end at the same location at the same time, whether they are counted in the
# cumulative sum depends on their order in the dataframe. We will deal with this ambiguity later.
start_and_end_events_by_location = start_and_end_events.groupby('location')
start_and_end_events['n_starts_before'] = start_and_end_events_by_location['is_start'].cumsum() - start_and_end_events['is_start']
start_and_end_events['n_ends_before']   = start_and_end_events_by_location['is_end'].cumsum()   - start_and_end_events['is_end']

# drop the indicators 
start_and_end_events = start_and_end_events.drop(['is_start','is_end'], axis=1)
start_and_end_events
Out[58]:
trip_index timestamp location kind n_starts_before n_ends_before
0 0 2023-01-01 00:26:00 (48.13795, 11.54569) start 0 0
1 1 2023-01-01 00:30:00 (48.12903, 11.54431) start 0 0
2 2 2023-01-01 00:32:00 (48.16841, 11.55566) start 0 0
3 3 2023-01-01 00:34:00 (48.16843, 11.55567) start 0 0
4 4 2023-01-01 00:35:00 (48.17104, 11.54878) start 0 0
... ... ... ... ... ... ...
1389310 710102 2024-01-01 00:07:00 (48.17334, 11.55952) end 0 0
1389304 710096 2024-01-01 00:08:00 (48.17337, 11.55951) end 0 0
1388734 709522 2024-01-01 00:09:00 (48.18169, 11.57342) end 0 0
1389308 710100 2024-01-01 00:39:00 (48.24959, 11.65319) end 1133 1082
1389124 709916 2024-01-01 16:00:00 (48.12701, 11.55865) end 1 1

1389314 rows × 6 columns

In [59]:
# Difference between the number of ends before the current trip (at this location) and the number of starts before the current trip (at this location) is equal to the assumed number of bikes 
# at this location shortly before this trip
start_and_end_events['assumed_bike_number_at_timestamp'] = start_and_end_events['n_ends_before'] - start_and_end_events['n_starts_before']
start_and_end_events
Out[59]:
trip_index timestamp location kind n_starts_before n_ends_before assumed_bike_number_at_timestamp
0 0 2023-01-01 00:26:00 (48.13795, 11.54569) start 0 0 0
1 1 2023-01-01 00:30:00 (48.12903, 11.54431) start 0 0 0
2 2 2023-01-01 00:32:00 (48.16841, 11.55566) start 0 0 0
3 3 2023-01-01 00:34:00 (48.16843, 11.55567) start 0 0 0
4 4 2023-01-01 00:35:00 (48.17104, 11.54878) start 0 0 0
... ... ... ... ... ... ... ...
1389310 710102 2024-01-01 00:07:00 (48.17334, 11.55952) end 0 0 0
1389304 710096 2024-01-01 00:08:00 (48.17337, 11.55951) end 0 0 0
1388734 709522 2024-01-01 00:09:00 (48.18169, 11.57342) end 0 0 0
1389308 710100 2024-01-01 00:39:00 (48.24959, 11.65319) end 1133 1082 -51
1389124 709916 2024-01-01 16:00:00 (48.12701, 11.55865) end 1 1 0

1389314 rows × 7 columns

As noted earlier, when multiple trips start or end at the same location and timestamp, their effect on a cumulative sum will depend on their order in the DataFrame. To avoid inconsistencies caused by this ordering, we take a conservative approach: we identify and mark all start and end events that occur at locations and times where two or more trips start or end simultaneously. We assign these events an assumed_bike_number_at_timestamp value of 9999 — a clearly implausible number used as a flag to indicate ambiguity. This ensures that such events are excluded from being treated as part of an ongoing trip chain and can only be interpreted as the start or end of a reconstructed trip sequence.

In [60]:
%%time
# Count occurrences of each (timestamp, location, kind) combination
grouped_start_and_end_events = start_and_end_events.groupby(['timestamp', 'location', 'kind']).size().reset_index(name='count')

# Keep only the combinations that appear 2+ times (the ambiguous ones)
ambiguous_time_location_combinations = grouped_start_and_end_events[grouped_start_and_end_events['count'] >= 2][['timestamp', 'location']].drop_duplicates()

# Mark these combinations for an efficient join
ambiguous_time_location_combinations['is_ambiguous_time_location_combination'] = True

# Join back to the original dataframe on timestamp and location; this marks
# both start and end rows with a matching timestamp + location
start_and_end_events = start_and_end_events.merge(
    ambiguous_time_location_combinations, 
    on=['timestamp', 'location'], 
    how='left'
)

# Set the sentinel value where ambiguity was found
start_and_end_events.loc[start_and_end_events['is_ambiguous_time_location_combination'] == True, 'assumed_bike_number_at_timestamp'] = 9999

# Drop the helper column
start_and_end_events = start_and_end_events.drop('is_ambiguous_time_location_combination', axis=1)
    
CPU times: user 3.37 s, sys: 271 ms, total: 3.64 s
Wall time: 3.68 s

Now let's join the assumed bike counts at different timestamps back to our data_filtered dataframe in order to find out how many bikes are expected at each trip's start and end location.

In [61]:
data_filtered = data_filtered.merge(start_and_end_events.query('kind == "start"')[['trip_index', 'assumed_bike_number_at_timestamp']], on='trip_index', how='left')
data_filtered.rename(columns={'assumed_bike_number_at_timestamp': 'assumed_bike_number_at_starttime'}, inplace=True)
data_filtered = data_filtered.merge(start_and_end_events.query('kind == "end"')[['trip_index', 'assumed_bike_number_at_timestamp']], on='trip_index', how='left')
data_filtered.rename(columns={'assumed_bike_number_at_timestamp': 'assumed_bike_number_at_endtime'}, inplace=True)
data_filtered
Out[61]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0
1 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445 NaN NaN 12.0 2.230186 (48.12903, 11.54431) (48.14797, 11.53445) 0 0
2 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649 NaN NaN 13.0 1.604274 (48.16841, 11.55566) (48.16467, 11.57649) 0 0
3 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648 NaN NaN 12.0 1.604289 (48.16843, 11.55567) (48.16464, 11.57648) 0 0
4 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007 NaN NaN 16.0 1.689230 (48.17104, 11.54878) (48.16243, 11.53007) 0 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
694652 710101 2023-12-31 23:52:00 2023-12-31 23:57:00 48.16719 11.55854 48.16917 11.55547 NaN NaN 5.0 0.317203 (48.16719, 11.55854) (48.16917, 11.55547) 1 0
694653 710102 2023-12-31 23:52:00 2024-01-01 00:07:00 48.17061 11.57391 48.17334 11.55952 NaN NaN 15.0 1.112501 (48.17061, 11.57391) (48.17334, 11.55952) 1 0
694654 710103 2023-12-31 23:53:00 2023-12-31 23:57:00 48.14131 11.56144 48.14094 11.56044 NaN NaN 4.0 0.085036 (48.14131, 11.56144) (48.14094, 11.56044) 1 0
694655 710104 2023-12-31 23:54:00 2023-12-31 23:59:00 48.12353 11.54494 48.12674 11.54758 NaN NaN 5.0 0.407460 (48.12353, 11.54494) (48.12674, 11.54758) 1 0
694656 710105 2023-12-31 23:54:00 2024-01-01 00:06:00 48.12376 11.54871 48.12376 11.54871 Kreisverwaltungsreferat Kreisverwaltungsreferat 12.0 0.000000 (48.12376, 11.54871) (48.12376, 11.54871) -403 -404

694657 rows × 15 columns

In [62]:
data_filtered['assumed_bike_number_at_starttime'].value_counts(dropna=False)
data_filtered['assumed_bike_number_at_endtime'].value_counts(dropna=False)
Out[62]:
assumed_bike_number_at_starttime
 1       524919
 0        11145
 9999      9143
 2         5349
-1         1739
          ...  
-586          1
 82           1
-562          1
-565          1
-623          1
Name: count, Length: 725, dtype: int64
Out[62]:
assumed_bike_number_at_endtime
 0       566535
 9999      8272
 1         5808
-1         2474
-2         1463
          ...  
-542          1
-515          1
-588          1
-585          1
-586          1
Name: count, Length: 706, dtype: int64

We see, however, that in the large majority of cases the assumed bike number at start time is 1 (the bike itself, ready to depart) and the assumed bike number at end time is 0 (no other bikes at the destination location).
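The share of trips with this "clean" configuration (exactly one bike assumed at the start, none at the destination) is straightforward to compute. A toy sketch, assuming columns like `assumed_bike_number_at_starttime` and `assumed_bike_number_at_endtime` (the values here are illustrative, including the 9999 ambiguity flag):

```python
import pandas as pd

# Toy trips with assumed bike counts; 9999 is the ambiguity sentinel
df = pd.DataFrame({
    'assumed_bike_number_at_starttime': [1, 1, 2, 1, 9999],
    'assumed_bike_number_at_endtime':   [0, 0, 0, 1, 0],
})

# A trip is "clean" when exactly one bike is assumed at the start location
# and no bikes are assumed at the destination
clean = (df['assumed_bike_number_at_starttime'] == 1) & (df['assumed_bike_number_at_endtime'] == 0)
print(f"{clean.mean():.0%} of trips are unambiguous")
```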

The next step in reconstructing trip histories involves performing a self-join on data_filtered — specifically, on the location coordinates. The goal is to match the end coordinates of one trip with the start coordinates of another. This allows us to identify potential consecutive trips and combine them into a single row, representing a possible continuation within a trip chain.

In [63]:
%%time
merged = pd.merge(data_filtered, data_filtered, left_on='end_coord_zipped', right_on='start_coord_zipped', suffixes=('_current', '_next'))
CPU times: user 1min 47s, sys: 2min 4s, total: 3min 51s
Wall time: 5min 13s

The following step is to retain only those rows where the potential subsequent trip actually starts at or after the time the first trip ends. This ensures a valid temporal sequence between the two trips. Additionally, we exclude any cases where a trip is linked to itself (which can happen if a trip's start and end times coincide), as such self-links would result in endless loops.

In [64]:
merged = merged[(merged['STARTTIME_next'] >= merged['ENDTIME_current'])  & (merged['trip_index_current'] != merged['trip_index_next'])]
merged
Out[64]:
trip_index_current STARTTIME_current ENDTIME_current STARTLAT_current STARTLON_current ENDLAT_current ENDLON_current RENTAL_STATION_NAME_current RETURN_STATION_NAME_current trip_duration_minutes_current trip_distance_km_current start_coord_zipped_current end_coord_zipped_current assumed_bike_number_at_starttime_current assumed_bike_number_at_endtime_current trip_index_next STARTTIME_next ENDTIME_next STARTLAT_next STARTLON_next ENDLAT_next ENDLON_next RENTAL_STATION_NAME_next RETURN_STATION_NAME_next trip_duration_minutes_next trip_distance_km_next start_coord_zipped_next end_coord_zipped_next assumed_bike_number_at_starttime_next assumed_bike_number_at_endtime_next
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 32 2023-01-01 01:06:00 2023-01-01 01:20:00 48.16123 11.55782 48.15818 11.57227 Barbarastr NaN 14.0 1.127231 (48.16123, 11.55782) (48.15818, 11.57227) 1 0
1 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 391 2023-01-01 11:16:00 2023-01-01 11:37:00 48.16123 11.55782 48.13517 11.54300 Barbarastr NaN 21.0 3.100444 (48.16123, 11.55782) (48.13517, 11.543) 0 0
2 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 563 2023-01-01 13:44:00 2023-01-01 14:04:00 48.16123 11.55782 48.18738 11.51868 Barbarastr NaN 20.0 4.114442 (48.16123, 11.55782) (48.18738, 11.51868) -1 0
3 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 1287 2023-01-02 08:54:00 2023-01-02 09:04:00 48.16123 11.55782 48.16199 11.58654 Barbarastr Münchner Freiheit 10.0 2.138208 (48.16123, 11.55782) (48.16199, 11.58654) -2 0
4 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 3293 2023-01-04 08:35:00 2023-01-04 08:48:00 48.16123 11.55782 48.15664 11.58934 Barbarastr Thiemestraße 13.0 2.399856 (48.16123, 11.55782) (48.15664, 11.58934) -1 4
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
132104396 710047 2023-12-31 21:09:00 2023-12-31 21:15:00 48.24959 11.65319 48.25210 11.64625 Maibaumplatz Garching NaN 6.0 0.586104 (48.24959, 11.65319) (48.2521, 11.64625) -50 0 710075 2023-12-31 23:01:00 2023-12-31 23:04:00 48.25210 11.64625 48.25206 11.64628 NaN NaN 3.0 0.004975 (48.2521, 11.64625) (48.25206, 11.64628) 1 0
132104397 710048 2023-12-31 21:09:00 2023-12-31 21:17:00 48.13264 11.58896 48.12867 11.56932 NaN NaN 8.0 1.527130 (48.13264, 11.58896) (48.12867, 11.56932) 1 0 710053 2023-12-31 21:32:00 2023-12-31 21:46:00 48.12867 11.56932 48.11155 11.56616 NaN NaN 14.0 1.918102 (48.12867, 11.56932) (48.11155, 11.56616) 1 0
132106694 710057 2023-12-31 21:55:00 2023-12-31 22:05:00 48.13657 11.58007 48.13337 11.57633 NaN NaN 10.0 0.451770 (48.13657, 11.58007) (48.13337, 11.57633) 1 0 710061 2023-12-31 22:08:00 2023-12-31 22:28:00 48.13337 11.57633 48.13036 11.55299 NaN NaN 20.0 1.769258 (48.13337, 11.57633) (48.13036, 11.55299) 1 0
132106696 710061 2023-12-31 22:08:00 2023-12-31 22:28:00 48.13337 11.57633 48.13036 11.55299 NaN NaN 20.0 1.769258 (48.13337, 11.57633) (48.13036, 11.55299) 1 0 710073 2023-12-31 23:01:00 2023-12-31 23:15:00 48.13036 11.55299 48.12626 11.53909 NaN NaN 14.0 1.130697 (48.13036, 11.55299) (48.12626, 11.53909) 1 0
132106697 710062 2023-12-31 22:11:00 2023-12-31 22:32:00 48.13203 11.57647 48.13035 11.55298 Gärtnerplatz NaN 21.0 1.758451 (48.13203, 11.57647) (48.13035, 11.55298) -319 0 710074 2023-12-31 23:01:00 2023-12-31 23:15:00 48.13035 11.55298 48.12624 11.53905 NaN NaN 14.0 1.133190 (48.13035, 11.55298) (48.12624, 11.53905) 1 0

66469297 rows × 30 columns

At this point, each trip (identified by the trip_index column) may have been joined with multiple possible continuations. This is expected, as multiple trips can start from the same location over the course of the year. However, to maintain consistency with our assumptions, we retain only the earliest valid continuation for each trip.

In [65]:
df_next_trip = merged.loc[merged.groupby('trip_index_current')['STARTTIME_next'].idxmin()]
df_next_trip
Out[65]:
trip_index_current STARTTIME_current ENDTIME_current STARTLAT_current STARTLON_current ENDLAT_current ENDLON_current RENTAL_STATION_NAME_current RETURN_STATION_NAME_current trip_duration_minutes_current trip_distance_km_current start_coord_zipped_current end_coord_zipped_current assumed_bike_number_at_starttime_current assumed_bike_number_at_endtime_current trip_index_next STARTTIME_next ENDTIME_next STARTLAT_next STARTLON_next ENDLAT_next ENDLON_next RENTAL_STATION_NAME_next RETURN_STATION_NAME_next trip_duration_minutes_next trip_distance_km_next start_coord_zipped_next end_coord_zipped_next assumed_bike_number_at_starttime_next assumed_bike_number_at_endtime_next
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 32 2023-01-01 01:06:00 2023-01-01 01:20:00 48.16123 11.55782 48.15818 11.57227 Barbarastr NaN 14.0 1.127231 (48.16123, 11.55782) (48.15818, 11.57227) 1 0
705 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445 NaN NaN 12.0 2.230186 (48.12903, 11.54431) (48.14797, 11.53445) 0 0 3464 2023-01-04 11:57:00 2023-01-04 12:06:00 48.14797 11.53445 48.14401 11.55768 NaN NaN 9.0 1.783849 (48.14797, 11.53445) (48.14401, 11.55768) 1 0
706 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649 NaN NaN 13.0 1.604274 (48.16841, 11.55566) (48.16467, 11.57649) 0 0 1176 2023-01-02 02:02:00 2023-01-02 02:28:00 48.16467 11.57649 48.18308 11.61287 NaN NaN 26.0 3.392864 (48.16467, 11.57649) (48.18308, 11.61287) 1 0
707 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648 NaN NaN 12.0 1.604289 (48.16843, 11.55567) (48.16464, 11.57648) 0 0 5227 2023-01-06 07:53:00 2023-01-06 08:06:00 48.16464 11.57648 48.17779 11.59179 NaN NaN 13.0 1.853300 (48.16464, 11.57648) (48.17779, 11.59179) 1 0
708 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007 NaN NaN 16.0 1.689230 (48.17104, 11.54878) (48.16243, 11.53007) 0 0 11707 2023-01-11 14:59:00 2023-01-11 15:15:00 48.16243 11.53007 48.13932 11.52527 NaN NaN 16.0 2.594378 (48.16243, 11.53007) (48.13932, 11.52527) 1 0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
132104396 710047 2023-12-31 21:09:00 2023-12-31 21:15:00 48.24959 11.65319 48.25210 11.64625 Maibaumplatz Garching NaN 6.0 0.586104 (48.24959, 11.65319) (48.2521, 11.64625) -50 0 710075 2023-12-31 23:01:00 2023-12-31 23:04:00 48.25210 11.64625 48.25206 11.64628 NaN NaN 3.0 0.004975 (48.2521, 11.64625) (48.25206, 11.64628) 1 0
132104397 710048 2023-12-31 21:09:00 2023-12-31 21:17:00 48.13264 11.58896 48.12867 11.56932 NaN NaN 8.0 1.527130 (48.13264, 11.58896) (48.12867, 11.56932) 1 0 710053 2023-12-31 21:32:00 2023-12-31 21:46:00 48.12867 11.56932 48.11155 11.56616 NaN NaN 14.0 1.918102 (48.12867, 11.56932) (48.11155, 11.56616) 1 0
132106694 710057 2023-12-31 21:55:00 2023-12-31 22:05:00 48.13657 11.58007 48.13337 11.57633 NaN NaN 10.0 0.451770 (48.13657, 11.58007) (48.13337, 11.57633) 1 0 710061 2023-12-31 22:08:00 2023-12-31 22:28:00 48.13337 11.57633 48.13036 11.55299 NaN NaN 20.0 1.769258 (48.13337, 11.57633) (48.13036, 11.55299) 1 0
132106696 710061 2023-12-31 22:08:00 2023-12-31 22:28:00 48.13337 11.57633 48.13036 11.55299 NaN NaN 20.0 1.769258 (48.13337, 11.57633) (48.13036, 11.55299) 1 0 710073 2023-12-31 23:01:00 2023-12-31 23:15:00 48.13036 11.55299 48.12626 11.53909 NaN NaN 14.0 1.130697 (48.13036, 11.55299) (48.12626, 11.53909) 1 0
132106697 710062 2023-12-31 22:11:00 2023-12-31 22:32:00 48.13203 11.57647 48.13035 11.55298 Gärtnerplatz NaN 21.0 1.758451 (48.13203, 11.57647) (48.13035, 11.55298) -319 0 710074 2023-12-31 23:01:00 2023-12-31 23:15:00 48.13035 11.55298 48.12624 11.53905 NaN NaN 14.0 1.133190 (48.13035, 11.55298) (48.12624, 11.53905) 1 0

654858 rows × 30 columns

Based on the df_next_trip dataframe, let us now create a mapping dictionary that links each trip's trip_index to the trip_index of its next trip. Using this dictionary, we then add a new column to data_filtered that contains the index of the next trip for each entry — effectively establishing a link between consecutive trips.

In [66]:
map_dict_next_trip = df_next_trip[['trip_index_current', 'trip_index_next']].set_index('trip_index_current').to_dict()['trip_index_next']
In [67]:
data_filtered['NEXTTRIP'] = data_filtered['trip_index'].map(map_dict_next_trip)
data_filtered
Out[67]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime NEXTTRIP
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 32.0
1 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445 NaN NaN 12.0 2.230186 (48.12903, 11.54431) (48.14797, 11.53445) 0 0 3464.0
2 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649 NaN NaN 13.0 1.604274 (48.16841, 11.55566) (48.16467, 11.57649) 0 0 1176.0
3 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648 NaN NaN 12.0 1.604289 (48.16843, 11.55567) (48.16464, 11.57648) 0 0 5227.0
4 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007 NaN NaN 16.0 1.689230 (48.17104, 11.54878) (48.16243, 11.53007) 0 0 11707.0
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
694652 710101 2023-12-31 23:52:00 2023-12-31 23:57:00 48.16719 11.55854 48.16917 11.55547 NaN NaN 5.0 0.317203 (48.16719, 11.55854) (48.16917, 11.55547) 1 0 NaN
694653 710102 2023-12-31 23:52:00 2024-01-01 00:07:00 48.17061 11.57391 48.17334 11.55952 NaN NaN 15.0 1.112501 (48.17061, 11.57391) (48.17334, 11.55952) 1 0 NaN
694654 710103 2023-12-31 23:53:00 2023-12-31 23:57:00 48.14131 11.56144 48.14094 11.56044 NaN NaN 4.0 0.085036 (48.14131, 11.56144) (48.14094, 11.56044) 1 0 NaN
694655 710104 2023-12-31 23:54:00 2023-12-31 23:59:00 48.12353 11.54494 48.12674 11.54758 NaN NaN 5.0 0.407460 (48.12353, 11.54494) (48.12674, 11.54758) 1 0 NaN
694656 710105 2023-12-31 23:54:00 2024-01-01 00:06:00 48.12376 11.54871 48.12376 11.54871 Kreisverwaltungsreferat Kreisverwaltungsreferat 12.0 0.000000 (48.12376, 11.54871) (48.12376, 11.54871) -403 -404 NaN

694657 rows × 16 columns

Just as we merged in the index of the next trip, we also add a column containing the index of the previous trip for each row.

In [68]:
map_dict_previous_trip = df_next_trip[['trip_index_current', 'trip_index_next']].set_index('trip_index_next').to_dict()['trip_index_current']
data_filtered['PREVIOUSTRIP'] = data_filtered['trip_index'].map(map_dict_previous_trip)
data_filtered
Out[68]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime NEXTTRIP PREVIOUSTRIP
0 0 2023-01-01 00:26:00 2023-01-01 00:51:00 48.13795 11.54569 48.16123 11.55782 NaN Barbarastr 25.0 2.741423 (48.13795, 11.54569) (48.16123, 11.55782) 0 0 32.0 NaN
1 1 2023-01-01 00:30:00 2023-01-01 00:42:00 48.12903 11.54431 48.14797 11.53445 NaN NaN 12.0 2.230186 (48.12903, 11.54431) (48.14797, 11.53445) 0 0 3464.0 NaN
2 2 2023-01-01 00:32:00 2023-01-01 00:45:00 48.16841 11.55566 48.16467 11.57649 NaN NaN 13.0 1.604274 (48.16841, 11.55566) (48.16467, 11.57649) 0 0 1176.0 NaN
3 3 2023-01-01 00:34:00 2023-01-01 00:46:00 48.16843 11.55567 48.16464 11.57648 NaN NaN 12.0 1.604289 (48.16843, 11.55567) (48.16464, 11.57648) 0 0 5227.0 NaN
4 4 2023-01-01 00:35:00 2023-01-01 00:51:00 48.17104 11.54878 48.16243 11.53007 NaN NaN 16.0 1.689230 (48.17104, 11.54878) (48.16243, 11.53007) 0 0 11707.0 NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
694652 710101 2023-12-31 23:52:00 2023-12-31 23:57:00 48.16719 11.55854 48.16917 11.55547 NaN NaN 5.0 0.317203 (48.16719, 11.55854) (48.16917, 11.55547) 1 0 NaN 708599.0
694653 710102 2023-12-31 23:52:00 2024-01-01 00:07:00 48.17061 11.57391 48.17334 11.55952 NaN NaN 15.0 1.112501 (48.17061, 11.57391) (48.17334, 11.55952) 1 0 NaN 709933.0
694654 710103 2023-12-31 23:53:00 2023-12-31 23:57:00 48.14131 11.56144 48.14094 11.56044 NaN NaN 4.0 0.085036 (48.14131, 11.56144) (48.14094, 11.56044) 1 0 NaN 709984.0
694655 710104 2023-12-31 23:54:00 2023-12-31 23:59:00 48.12353 11.54494 48.12674 11.54758 NaN NaN 5.0 0.407460 (48.12353, 11.54494) (48.12674, 11.54758) 1 0 NaN 708486.0
694656 710105 2023-12-31 23:54:00 2024-01-01 00:06:00 48.12376 11.54871 48.12376 11.54871 Kreisverwaltungsreferat Kreisverwaltungsreferat 12.0 0.000000 (48.12376, 11.54871) (48.12376, 11.54871) -403 -404 NaN NaN

694657 rows × 17 columns

Next, we enforce our handling of ambiguity. Specifically, we set the next trip index to NaN if the trip ends at a station or if the assumed_bike_number_at_endtime is not equal to 0. Similarly, we set the previous trip index to NaN if the trip starts at a station or if the assumed_bike_number_at_starttime is not equal to 1. This ensures that only unambiguous connections between trips are retained in our reconstructed trip histories.

In [69]:
data_filtered.loc[data_filtered['RENTAL_STATION_NAME'].notna(), 'PREVIOUSTRIP'] = np.nan
data_filtered.loc[data_filtered['RETURN_STATION_NAME'].notna(), 'NEXTTRIP'] = np.nan
In [70]:
data_filtered.loc[data_filtered['assumed_bike_number_at_starttime'] != 1, 'PREVIOUSTRIP'] = np.nan
data_filtered.loc[data_filtered['assumed_bike_number_at_endtime'] != 0, 'NEXTTRIP'] = np.nan
In [71]:
%%time
trip_chains_list = []
data_filtered['trip_index_2'] = data_filtered['trip_index']
trip_index_to_row = data_filtered.set_index('trip_index_2').to_dict(orient='index')
trip_index_to_next = data_filtered.set_index('trip_index')['NEXTTRIP'].to_dict()


for trip_start in data_filtered[data_filtered['PREVIOUSTRIP'].isna()]['trip_index']:
    first_trip_start = trip_start
    lst = []
    trip_segment_number = 1
    while True:
        row_data = trip_index_to_row.get(trip_start)
        if (row_data is None) or ((row_data['assumed_bike_number_at_starttime'] != 1) and (trip_segment_number > 1)):
            break
        lst.append(row_data)
        next_trip = trip_index_to_next.get(trip_start)
        if pd.isna(next_trip):
            break
        trip_start = next_trip
        trip_segment_number += 1

    trip_chain = pd.DataFrame(lst)
    trip_chain['trip_chain_index'] = first_trip_start
    trip_chains_list.append(trip_chain)
CPU times: user 3min 12s, sys: 4.82 s, total: 3min 17s
Wall time: 3min 21s
In [72]:
all_trip_chains = pd.concat(trip_chains_list, ignore_index=True)
all_trip_chains.shape
Out[72]:
(694548, 18)

Let's make sure that each trip is assigned to exactly one trip chain.

In [73]:
all_trip_chains['trip_index'].value_counts(dropna=False)
Out[73]:
trip_index
710105    1
0         1
1         1
3464      1
710062    1
         ..
4988      1
4086      1
4041      1
3599      1
3548      1
Name: count, Length: 694548, dtype: int64

Let us now plot some trip chains.

In [74]:
# The longest trip chain we could reconstruct consists of 65 trips
all_trip_chains.groupby('trip_chain_index')['trip_index'].count().sort_values(ascending=False)
Out[74]:
trip_chain_index
255855    65
502264    64
38459     63
565133    61
520450    60
          ..
709932     1
709929     1
709928     1
709927     1
145791     1
Name: trip_index, Length: 171597, dtype: int64
In [75]:
def plot_trip_chains(trip_chain_index):
    
    m = folium.Map(location=[data_filtered['STARTLAT'].mean(), data_filtered['STARTLON'].mean()], zoom_start=12)
    
    for _, row in all_trip_chains.query('trip_chain_index == @trip_chain_index').iterrows():
        q = folium.PolyLine(locations=[(row['STARTLAT'], row['STARTLON']),
                                   (row['ENDLAT'], row['ENDLON'])],
                        color='red', tooltip=row['trip_index']).add_to(m);
    display(HTML(f"<h3>Trip chain {trip_chain_index}</h3>"))
    display(m)
    
    sns.histplot(all_trip_chains.query('trip_chain_index == @trip_chain_index')['trip_duration_minutes'])
    plt.title(f'Distribution of Trip Duration for Trip Chain {trip_chain_index}');
    display(all_trip_chains.query('trip_chain_index == @trip_chain_index'))
In [76]:
plot_trip_chains(255855)
# the longest trip chain we could reconstruct

Trip chain 255855

Make this Notebook Trusted to load map: File -> Trust Notebook
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime NEXTTRIP PREVIOUSTRIP trip_chain_index
259600 255855 2023-06-04 19:42:00 2023-06-04 19:51:00 48.14666 11.57032 48.15718 11.57123 NaN NaN 9.0 1.171711 (48.14666, 11.57032) (48.15718, 11.57123) 1 0 256084.0 NaN 255855
259601 256084 2023-06-04 21:15:00 2023-06-04 21:43:00 48.15718 11.57123 48.12156 11.57343 NaN NaN 28.0 3.964079 (48.15718, 11.57123) (48.12156, 11.57343) 1 0 257955.0 255855.0 255855
259602 257955 2023-06-05 18:24:00 2023-06-05 18:51:00 48.12156 11.57343 48.13695 11.58059 NaN NaN 27.0 1.792337 (48.12156, 11.57343) (48.13695, 11.58059) 1 0 258617.0 256084.0 255855
259603 258617 2023-06-05 22:42:00 2023-06-05 22:53:00 48.13695 11.58059 48.11798 11.57412 NaN NaN 11.0 2.163616 (48.13695, 11.58059) (48.11798, 11.57412) 1 0 259079.0 257955.0 255855
259604 259079 2023-06-06 08:53:00 2023-06-06 09:05:00 48.11798 11.57412 48.12043 11.60262 NaN NaN 12.0 2.139334 (48.11798, 11.57412) (48.12043, 11.60262) 1 0 260286.0 258617.0 255855
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
259660 340813 2023-06-29 20:41:00 2023-06-29 20:54:00 48.15744 11.59416 48.16072 11.56746 NaN NaN 13.0 2.019569 (48.15744, 11.59416) (48.16072, 11.56746) 1 0 341054.0 340254.0 255855
259661 341054 2023-06-29 21:41:00 2023-06-29 21:51:00 48.16072 11.56746 48.16756 11.55479 NaN NaN 10.0 1.211100 (48.16072, 11.56746) (48.16756, 11.55479) 1 0 341240.0 340813.0 255855
259662 341240 2023-06-29 22:23:00 2023-06-29 22:45:00 48.16756 11.55479 48.12598 11.54939 NaN NaN 22.0 4.640842 (48.16756, 11.55479) (48.12598, 11.54939) 1 0 344563.0 341054.0 255855
259663 344563 2023-07-01 13:39:00 2023-07-01 13:46:00 48.12598 11.54939 48.13285 11.56585 NaN NaN 7.0 1.443881 (48.12598, 11.54939) (48.13285, 11.56585) 1 0 344901.0 341240.0 255855
259664 344901 2023-07-01 15:03:00 2023-07-01 15:12:00 48.13285 11.56585 48.11968 11.54820 NaN Implerstraße 9.0 1.967455 (48.13285, 11.56585) (48.11968, 11.5482) 1 -257 NaN 344563.0 255855

65 rows × 18 columns

No description has been provided for this image
In [77]:
plot_trip_chains(629025)

Trip chain 629025

Make this Notebook Trusted to load map: File -> Trust Notebook
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime NEXTTRIP PREVIOUSTRIP trip_chain_index
622467 629025 2023-10-21 10:49:00 2023-10-21 10:52:00 48.17869 11.59786 48.17792 11.60184 NaN NaN 3.0 0.308119 (48.17869, 11.59786) (48.17792, 11.60184) 0 0 629335.0 NaN 629025
622468 629335 2023-10-21 12:46:00 2023-10-21 12:50:00 48.17792 11.60184 48.17869 11.59785 NaN NaN 4.0 0.308833 (48.17792, 11.60184) (48.17869, 11.59785) 1 0 629526.0 629025.0 629025
622469 629526 2023-10-21 13:53:00 2023-10-21 14:44:00 48.17869 11.59785 48.10723 11.55143 NaN NaN 51.0 8.664319 (48.17869, 11.59785) (48.10723, 11.55143) 1 0 629815.0 629335.0 629025
622470 629815 2023-10-21 15:26:00 2023-10-21 15:32:00 48.10723 11.55143 48.11199 11.55411 NaN NaN 6.0 0.565652 (48.10723, 11.55143) (48.11199, 11.55411) 1 0 629885.0 629526.0 629025
622471 629885 2023-10-21 15:47:00 2023-10-21 15:59:00 48.11199 11.55411 48.12283 11.54722 NaN NaN 12.0 1.309956 (48.11199, 11.55411) (48.12283, 11.54722) 1 0 630034.0 629815.0 629025
622472 630034 2023-10-21 16:36:00 2023-10-21 16:53:00 48.12283 11.54722 48.13632 11.57500 NaN NaN 17.0 2.554636 (48.12283, 11.54722) (48.13632, 11.575) 1 0 630116.0 629885.0 629025
622473 630116 2023-10-21 17:06:00 2023-10-21 17:26:00 48.13632 11.57500 48.11762 11.54011 NaN NaN 20.0 3.327069 (48.13632, 11.575) (48.11762, 11.54011) 1 0 630212.0 630034.0 629025
622474 630212 2023-10-21 17:44:00 2023-10-21 17:51:00 48.11762 11.54011 48.11257 11.53183 NaN NaN 7.0 0.833911 (48.11762, 11.54011) (48.11257, 11.53183) 1 0 631085.0 630116.0 629025
622475 631085 2023-10-22 11:46:00 2023-10-22 12:05:00 48.11257 11.53183 48.09461 11.56161 NaN NaN 19.0 2.984477 (48.11257, 11.53183) (48.09461, 11.56161) 1 0 650056.0 630212.0 629025
622476 650056 2023-11-02 05:20:00 2023-11-02 05:44:00 48.09461 11.56161 48.10874 11.57478 NaN Wettersteinplatz 24.0 1.852197 (48.09461, 11.56161) (48.10874, 11.57478) 1 -179 NaN 631085.0 629025
No description has been provided for this image
In [78]:
# this trip chain is remarkable: the same bike appears to have been used for about half a month for commuting among (mainly) four locations.
plot_trip_chains(685353)

Trip chain 685353

Make this Notebook Trusted to load map: File -> Trust Notebook
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime assumed_bike_number_at_endtime NEXTTRIP PREVIOUSTRIP trip_chain_index
675438 685353 2023-11-28 21:11:00 2023-11-28 21:15:00 48.17869 11.59786 48.17787 11.60182 NaN NaN 4.0 0.308289 (48.17869, 11.59786) (48.17787, 11.60182) 0 0 685487.0 NaN 685353
675439 685487 2023-11-29 07:37:00 2023-11-29 07:42:00 48.17787 11.60182 48.17870 11.59789 NaN NaN 5.0 0.306491 (48.17787, 11.60182) (48.1787, 11.59789) 1 0 686190.0 685353.0 685353
675440 686190 2023-11-29 17:47:00 2023-11-29 17:55:00 48.17870 11.59789 48.18252 11.59343 NaN NaN 8.0 0.538910 (48.1787, 11.59789) (48.18252, 11.59343) 1 0 686259.0 685487.0 685353
675441 686259 2023-11-29 18:12:00 2023-11-29 18:15:00 48.18252 11.59343 48.18231 11.59214 NaN NaN 3.0 0.098728 (48.18252, 11.59343) (48.18231, 11.59214) 1 0 686444.0 686190.0 685353
675442 686444 2023-11-29 19:34:00 2023-11-29 19:39:00 48.18231 11.59214 48.18208 11.59382 NaN NaN 5.0 0.127519 (48.18231, 11.59214) (48.18208, 11.59382) 1 0 686466.0 686259.0 685353
675443 686466 2023-11-29 19:51:00 2023-11-29 19:59:00 48.18208 11.59382 48.17870 11.59785 NaN NaN 8.0 0.480694 (48.18208, 11.59382) (48.1787, 11.59785) 1 0 686548.0 686444.0 685353
675444 686548 2023-11-29 21:10:00 2023-11-29 21:14:00 48.17870 11.59785 48.17787 11.60182 NaN NaN 4.0 0.309329 (48.1787, 11.59785) (48.17787, 11.60182) 1 0 686746.0 686466.0 685353
675445 686746 2023-11-30 07:10:00 2023-11-30 07:14:00 48.17787 11.60182 48.17871 11.59789 NaN NaN 4.0 0.306828 (48.17787, 11.60182) (48.17871, 11.59789) 1 0 687226.0 686548.0 685353
675446 687226 2023-11-30 17:28:00 2023-11-30 17:37:00 48.17871 11.59789 48.18220 11.59216 NaN NaN 9.0 0.576338 (48.17871, 11.59789) (48.1822, 11.59216) 1 0 687350.0 686746.0 685353
675447 687350 2023-11-30 19:40:00 2023-11-30 19:52:00 48.18220 11.59216 48.17875 11.59784 NaN NaN 12.0 0.570594 (48.1822, 11.59216) (48.17875, 11.59784) 1 0 687380.0 687226.0 685353
675448 687380 2023-11-30 21:11:00 2023-11-30 21:15:00 48.17875 11.59784 48.17789 11.60183 NaN NaN 4.0 0.311756 (48.17875, 11.59784) (48.17789, 11.60183) 1 0 687509.0 687350.0 685353
675449 687509 2023-12-01 07:31:00 2023-12-01 07:35:00 48.17789 11.60183 48.17868 11.59787 NaN NaN 4.0 0.307319 (48.17789, 11.60183) (48.17868, 11.59787) 1 0 688631.0 687380.0 685353
675450 688631 2023-12-05 17:42:00 2023-12-05 17:54:00 48.17868 11.59787 48.18256 11.59351 NaN NaN 12.0 0.539683 (48.17868, 11.59787) (48.18256, 11.59351) 1 0 688641.0 687509.0 685353
675451 688641 2023-12-05 18:00:00 2023-12-05 18:03:00 48.18256 11.59351 48.18228 11.59209 NaN NaN 3.0 0.110088 (48.18256, 11.59351) (48.18228, 11.59209) 1 0 688721.0 688631.0 685353
675452 688721 2023-12-05 19:44:00 2023-12-05 19:57:00 48.18228 11.59209 48.17870 11.59783 NaN NaN 13.0 0.583666 (48.18228, 11.59209) (48.1787, 11.59783) 1 0 689117.0 688641.0 685353
675453 689117 2023-12-06 16:04:00 2023-12-06 16:14:00 48.17870 11.59783 48.18221 11.59210 NaN NaN 10.0 0.577838 (48.1787, 11.59783) (48.18221, 11.5921) 1 0 689172.0 688721.0 685353
675454 689172 2023-12-06 17:38:00 2023-12-06 17:42:00 48.18221 11.59210 48.18258 11.59358 NaN NaN 4.0 0.117494 (48.18221, 11.5921) (48.18258, 11.59358) 1 0 689194.0 689117.0 685353
675455 689194 2023-12-06 17:57:00 2023-12-06 18:06:00 48.18258 11.59358 48.17871 11.59791 NaN NaN 9.0 0.537455 (48.18258, 11.59358) (48.17871, 11.59791) 1 0 689846.0 689172.0 685353
675456 689846 2023-12-07 17:29:00 2023-12-07 17:40:00 48.17871 11.59791 48.18223 11.59210 NaN NaN 11.0 0.582984 (48.17871, 11.59791) (48.18223, 11.5921) 1 0 689986.0 689194.0 685353
675457 689986 2023-12-07 19:48:00 2023-12-07 19:58:00 48.18223 11.59210 48.17869 11.59782 NaN NaN 10.0 0.579550 (48.18223, 11.5921) (48.17869, 11.59782) 1 0 690554.0 689846.0 685353
675458 690554 2023-12-08 17:27:00 2023-12-08 17:39:00 48.17869 11.59782 48.18246 11.59232 NaN NaN 12.0 0.585674 (48.17869, 11.59782) (48.18246, 11.59232) 1 0 690625.0 689986.0 685353
675459 690625 2023-12-08 19:31:00 2023-12-08 19:41:00 48.18246 11.59232 48.17869 11.59784 NaN NaN 10.0 0.586714 (48.18246, 11.59232) (48.17869, 11.59784) 1 0 692621.0 690554.0 685353
675460 692621 2023-12-11 17:08:00 2023-12-11 17:26:00 48.17869 11.59784 48.18233 11.59213 NaN NaN 18.0 0.586620 (48.17869, 11.59784) (48.18233, 11.59213) 1 0 692840.0 690625.0 685353
675461 692840 2023-12-11 18:51:00 2023-12-11 18:59:00 48.18233 11.59213 48.17877 11.59777 NaN NaN 8.0 0.576721 (48.18233, 11.59213) (48.17877, 11.59777) 1 0 692990.0 692621.0 685353
675462 692990 2023-12-11 21:06:00 2023-12-11 21:09:00 48.17877 11.59777 48.17787 11.60182 NaN NaN 3.0 0.317380 (48.17877, 11.59777) (48.17787, 11.60182) 1 0 693143.0 692840.0 685353
675463 693143 2023-12-12 07:15:00 2023-12-12 07:19:00 48.17787 11.60182 48.17875 11.59782 NaN NaN 4.0 0.313152 (48.17787, 11.60182) (48.17875, 11.59782) 1 0 693779.0 692990.0 685353
675464 693779 2023-12-12 17:45:00 2023-12-12 17:53:00 48.17875 11.59782 48.18230 11.59210 NaN NaN 8.0 0.580305 (48.17875, 11.59782) (48.1823, 11.5921) 1 1 NaN 693143.0 685353
No description has been provided for this image

Idle duration¶

With reliable trip histories in place, we can now estimate bike idle times. In particular, very long idle periods may indicate technical defects and, at the same time, suggest possible inefficiencies in detecting and responding to those issues on the part of the bikesharing operator. Extended idle times might also reflect situations where operator staff are unable to access the idle bikes, for example due to their location or other logistical constraints.

As a first step, we perform a self-merge of the all_trip_chains dataframe. This allows us to attach, for each trip, the start time of the next trip in the chain, as well as the assumed number of bikes present at the location at that time. This additional information enables us to exclude cases where the bike used for the subsequent trip is ambiguous — that is, when more than one bike is present at the location at the start of the next trip.

In [79]:
all_trip_chains_next_trip_start = all_trip_chains.merge(all_trip_chains[['trip_index', 'STARTTIME', 'assumed_bike_number_at_starttime']], how='left', left_on='NEXTTRIP', right_on='trip_index', suffixes=('_current', '_next'))
all_trip_chains_next_trip_start.loc[all_trip_chains_next_trip_start['assumed_bike_number_at_starttime_next'] != 1, 'STARTTIME_next'] = pd.NaT
all_trip_chains_next_trip_start.drop(columns=['trip_index_next'], inplace=True)
all_trip_chains_next_trip_start.rename(columns={'STARTTIME_next': 'next_trip_start'}, inplace=True)
In [80]:
all_trip_chains_next_trip_start['idle_time_to_next_trip'] = all_trip_chains_next_trip_start['next_trip_start'].sub(all_trip_chains_next_trip_start['ENDTIME'])
In [81]:
# convert the idle-time timedelta to days
all_trip_chains_next_trip_start['idle_time_to_next_trip_days'] = \
    all_trip_chains_next_trip_start['idle_time_to_next_trip'].dt.total_seconds() / 60 / 60 / 24

Let's examine the distribution of the idle time (in days), similar to what we did for trip durations. We'll again visualise it using a boxplot and a histogram.

In [82]:
fig, axes = plt.subplots(1, 2, figsize=(12, 6))  # 1 row, 2 columns

sns.boxplot(all_trip_chains_next_trip_start['idle_time_to_next_trip_days'], ax=axes[0])
axes[0].set_title('All idle time distribution')

sns.boxplot(all_trip_chains_next_trip_start['idle_time_to_next_trip_days'], ax=axes[1])
axes[1].set_ylim(0, 10)
axes[1].set_title('Idle time below 10 days')

plt.tight_layout();
No description has been provided for this image

According to the standard boxplot settings, outliers (depicted as empty circles) are the points lying more than 1.5 times the interquartile range beyond either end of that range (which here spans 0.05 to 0.79 days). This means that all idle times above roughly 1.9 days are considered outliers.
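The outlier fence can be verified directly from the quartiles reported by describe() below (a quick check of seaborn's default 1.5 × IQR whisker rule):

```python
# Quartiles of the idle time in days, taken from the describe() output below
q1, q3 = 0.052778, 0.786111
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr
print(round(upper_fence, 2))  # roughly 1.89 days
```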

In [83]:
# the idle time cannot be estimated reliably for approx. 25% of all trips (share of NaN values)
all_trip_chains_next_trip_start['idle_time_to_next_trip_days'].isna().mean()
Out[83]:
np.float64(0.247062837989599)
In [84]:
all_trip_chains_next_trip_start[['idle_time_to_next_trip_days']].describe()
Out[84]:
idle_time_to_next_trip_days
count 522951.000000
mean 0.891344
std 2.632067
min 0.000000
25% 0.052778
50% 0.227083
75% 0.786111
max 271.376389

For the histogram, we consider only idle times below 10 days so that the shape of the distribution is easier to recognise.

In [85]:
sns.histplot(all_trip_chains_next_trip_start.query('idle_time_to_next_trip_days < 10')['idle_time_to_next_trip_days']);
plt.title('Idle time below 10 days');
No description has been provided for this image

We see that this distribution again plausibly belongs to the exponential family: shorter idle times are more frequent, and very long idle times become increasingly rare. This agrees with common sense. Nevertheless, it is worth examining those long idle periods more closely — particularly to understand where they occurred. Notably, idle times exceeding 10 days account for approximately 1% of all reliably estimated idle durations.
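As a rough plausibility sketch (using synthetic data, not the trip data itself): an exponential sample whose scale matches the observed mean idle time produces the same qualitatively decaying histogram shape.

```python
import numpy as np

# Draw a synthetic exponential sample with scale equal to the observed
# mean idle time (~0.89 days) and bin it like the histogram above.
rng = np.random.default_rng(0)
sample = rng.exponential(scale=0.89, size=100_000)
counts, _ = np.histogram(sample, bins=20, range=(0, 10))

# Over the well-populated bins the counts decay monotonically,
# matching the shape of the observed idle-time histogram.
print(counts[:5])
```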

In [86]:
all_trip_chains_next_trip_start.query('idle_time_to_next_trip_days >= 10').shape[0] / all_trip_chains_next_trip_start['idle_time_to_next_trip_days'].dropna().shape[0]
Out[86]:
0.01055548225359546

Now let's plot all especially long idle times (over 30 days) on the map.

In [87]:
%%time
# Aggregate the data by start location
long_idle_time = all_trip_chains_next_trip_start.query('idle_time_to_next_trip_days > 30').copy()

# Normalize the count to scale marker sizes (optional)
max_time = long_idle_time['idle_time_to_next_trip_days'].max()
long_idle_time['size'] = long_idle_time['idle_time_to_next_trip_days'] / max_time * 10 


# Create a Folium map
m = folium.Map(location=[data_filtered['STARTLAT'].mean(), data_filtered['STARTLON'].mean()], zoom_start=12)

# Add markers to the map
for i, row in long_idle_time.iterrows():
    _ = folium.CircleMarker(
        location=[row['ENDLAT'], row['ENDLON']],
        radius=row['size'], 
        color='blue',
        fill=True,
        fill_color='blue',
        fill_opacity=0.6,
        popup=f"Trip {row['trip_index_current']}, {row['idle_time_to_next_trip_days']} days"
    ).add_to(m);

display(HTML("<h3>Locations With Exceptionally Long Bike Idle Times</h3>"))
m

Locations With Exceptionally Long Bike Idle Times

CPU times: user 93.8 ms, sys: 5.09 ms, total: 98.9 ms
Wall time: 101 ms
Out[87]:
Make this Notebook Trusted to load map: File -> Trust Notebook

The findings present a mixed picture. On the one hand, there are cases of long idle times where bikes remained unused for extended periods in high-traffic areas, such as busy streets or intersections. Examples include:

  • Trip 329973 at the Arnulfstraße / Zirkus-Krone-Straße intersection
  • Trip 107035, located just next to Hauptbahnhof (the main station)
  • Trip 285787 on Lindwurmstraße

Such cases are unexpected, as these are locations where bikes are typically in high demand. If the data is accurate, this may indicate technical issues that prevented the bikes from being used and should have triggered a response from the bikesharing operator. However, given the limitations of the dataset, we must also acknowledge the possibility that these are false positives resulting from reconstruction inaccuracies.

On the other hand, there are more plausible cases where bikes remained idle in backyards or locations with limited public access, including:

  • Trip 241180 in the backyard of the Informatics Institute of LMU
  • Trip 29908 in the inner courtyard of a Salesianer Don Bosco building
  • Trip 349735 on the Hochschule Fresenius campus
  • Trip 59512 in the inner courtyard of Krankenhaus der Barmherzigen Brüder

These cases seem more likely and could indicate misuse or violations of the service’s terms of use. In such situations, it may be worth considering operational intervention to prevent bikes from being parked in locations that are difficult to access or not intended for public drop-off.
In [88]:
#   One example of an especially long idle time
ind = 29908
all_trip_chains_next_trip_start.query('trip_index_current == @ind')
start_and_end_events.query('trip_index == [@ind]')
start_and_end_events[start_and_end_events['location'] == (48.12469, 11.59479)]
Out[88]:
trip_index_current STARTTIME_current ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_STATION_NAME RETURN_STATION_NAME trip_duration_minutes trip_distance_km start_coord_zipped end_coord_zipped assumed_bike_number_at_starttime_current assumed_bike_number_at_endtime NEXTTRIP PREVIOUSTRIP trip_chain_index next_trip_start assumed_bike_number_at_starttime_next idle_time_to_next_trip idle_time_to_next_trip_days
29313 29908 2023-01-29 13:11:00 2023-01-29 13:18:00 48.1172 11.60153 48.12469 11.59479 NaN NaN 7.0 0.972323 (48.1172, 11.60153) (48.12469, 11.59479) 1 0 636403.0 29853.0 23580 2023-10-25 10:58:00 1.0 268 days 21:40:00 268.902778
Out[88]:
trip_index timestamp location kind n_starts_before n_ends_before assumed_bike_number_at_timestamp
58271 29908 2023-01-29 13:11:00 (48.1172, 11.60153) start 0 1 1
58282 29908 2023-01-29 13:18:00 (48.12469, 11.59479) end 0 0 0
Out[88]:
trip_index timestamp location kind n_starts_before n_ends_before assumed_bike_number_at_timestamp
58282 29908 2023-01-29 13:18:00 (48.12469, 11.59479) end 0 0 0
1245165 636403 2023-10-25 10:58:00 (48.12469, 11.59479) start 0 1 1

Trip count prediction¶

Let us now attempt to model the number of hourly trip starts (referred to more generally as trip counts) in 15 grid cells located in the city centre. (We use the same grid we defined earlier to analyse trip start and end density and trip flows.) To do this, we will derive several indicator variables from the available data, including:

  • Month
  • Day of the week
  • Weekend and holiday flags
  • Hour of the day

In addition to these internal features, we will incorporate external data from two sources:

  • Strike data: Information on public transport strikes, which we previously identified as significant outliers in trip activity. We now use this information explicitly as an explanatory variable.
  • Weather data: We retrieve historical hourly weather data for Munich from open-meteo (https://open-meteo.com), including temperature, precipitation, and wind speed.

The choice of these explanatory variables is both intuitive and minimal — they likely form the simplest meaningful feature set for predicting trip counts. This makes it particularly interesting to explore how well we can model trip activity using only these straightforward, easily accessible indicators.
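The calendar features can be sketched as follows (a minimal illustration on toy timestamps; the column name `hour_start` is an assumption, and the holiday flag would additionally require a holiday calendar, e.g. from the `holidays` package):

```python
import pandas as pd

# Toy hourly timestamps spanning a Friday-to-Saturday night
hours = pd.DataFrame({'hour_start': pd.date_range('2023-03-03 22:00',
                                                  periods=4, freq='h')})
hours['month'] = hours['hour_start'].dt.month
hours['day_of_week'] = hours['hour_start'].dt.dayofweek        # Monday = 0
hours['is_weekend'] = (hours['day_of_week'] >= 5).astype(int)  # Sat/Sun
hours['hour'] = hours['hour_start'].dt.hour
print(hours[['day_of_week', 'is_weekend', 'hour']].to_numpy().tolist())
```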
In [89]:
def get_weather_data(start_date, end_date, lat, lon):
    # Setup the Open-Meteo API client with cache and retry on error
    cache_session = requests_cache.CachedSession('.cache', expire_after = -1)
    retry_session = retry(cache_session, retries = 5, backoff_factor = 0.2)
    openmeteo = openmeteo_requests.Client(session = retry_session)
    
    # Make sure all required weather variables are listed here
    # The order of variables in hourly or daily is important to assign them correctly below
    url = "https://archive-api.open-meteo.com/v1/archive"
    params = {
    	"latitude": lat,
    	"longitude": lon,
    	"start_date": start_date,
    	"end_date": end_date,
    	"hourly": ["temperature_2m", "precipitation", "wind_speed_10m"],
    	"timezone": "Europe/Berlin"
    }
    responses = openmeteo.weather_api(url, params=params)
    
    response = responses[0]
    print(f"Coordinates {response.Latitude()}°N {response.Longitude()}°E")
    print(f"Elevation {response.Elevation()} m asl")
    print(f"Timezone {response.Timezone()} {response.TimezoneAbbreviation()}")
    print(f"Timezone difference to GMT+0 {response.UtcOffsetSeconds()} s")
    
    # Process hourly data. The order of variables needs to be the same as requested.
    hourly = response.Hourly()
    hourly_temperature_2m = hourly.Variables(0).ValuesAsNumpy()
    hourly_precipitation = hourly.Variables(1).ValuesAsNumpy()
    hourly_wind_speed_10m = hourly.Variables(2).ValuesAsNumpy()
    
    hourly_data = {"date": pd.date_range(
        start=pd.to_datetime(hourly.Time(), unit="s"),
        end=pd.to_datetime(hourly.TimeEnd(), unit="s"),
        freq=pd.Timedelta(seconds=hourly.Interval()),
        inclusive="left"
    )}
    hourly_data["temperature_2m"] = hourly_temperature_2m
    hourly_data["precipitation"] = hourly_precipitation
    hourly_data["wind_speed_10m"] = hourly_wind_speed_10m
    
    hourly_dataframe = pd.DataFrame(data=hourly_data)
    return hourly_dataframe
In [90]:
# we retrieve the weather data for 2023 using the code suggested on https://open-meteo.com
weather_2023_df = get_weather_data("2023-01-01", "2024-01-01", 48.1374, 11.5755)
weather_2023_df
Coordinates 48.1195068359375°N 11.550000190734863°E
Elevation 524.0 m asl
Timezone b'Europe/Berlin' b'GMT+2'
Timezone difference to GMT+0 7200 s
Out[90]:
date temperature_2m precipitation wind_speed_10m
0 2022-12-31 22:00:00 7.4215 0.0 12.722830
1 2022-12-31 23:00:00 7.6215 0.0 12.245293
2 2023-01-01 00:00:00 7.5715 0.0 11.726277
3 2023-01-01 01:00:00 7.6215 0.0 13.237039
4 2023-01-01 02:00:00 7.6215 0.0 10.691453
... ... ... ... ...
8779 2024-01-01 17:00:00 4.4215 0.0 17.227421
8780 2024-01-01 18:00:00 4.1215 0.0 17.906157
8781 2024-01-01 19:00:00 4.0215 0.0 18.359999
8782 2024-01-01 20:00:00 4.0215 0.0 17.906157
8783 2024-01-01 21:00:00 3.5215 0.0 16.485485

8784 rows × 4 columns
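Note that the first timestamp above is 2022-12-31 22:00: `pd.to_datetime(..., unit="s")` interprets the epoch seconds as UTC, so the `date` column is shifted relative to Munich local time despite the `timezone` request parameter. If local timestamps were preferred, a conversion along these lines (a sketch, not part of the pipeline) would work:

```python
import pandas as pd

# Epoch seconds parsed with unit="s" yield naive UTC timestamps;
# localise to UTC first, then convert to the Munich timezone.
utc_times = pd.to_datetime([1672527600], unit="s")  # 2022-12-31 23:00 UTC
local_times = utc_times.tz_localize("UTC").tz_convert("Europe/Berlin")
print(local_times[0])  # 2023-01-01 00:00:00+01:00
```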

When preparing the data for modelling, we perform the following steps:

  • Filter the dataset to include only trips that started within the 15 central grid cells
  • Aggregate the number of trip starts per hour
  • Merge the bike trip data with the weather data based on the corresponding time intervals
  • Generate categorical variables for key temporal and contextual factors, including:
    • Hour of the day
    • Day of the week
    • Month
    • Whether a public transport strike occurred on that day
    • Whether the day was a weekend or a holiday
In [91]:
def prepare_data_for_modelling(grid, geometry, bikes_data, weather_data, holiday_dates, strike_dates, start, end):
    starts_gdf = gpd.GeoDataFrame(
        bikes_data[['STARTLON', 'STARTLAT', 'STARTTIME']],
        geometry=geometry, crs='EPSG:4326'
    )
    data_for_modelling = gpd.sjoin(starts_gdf, grid, how='left', predicate='within')
    # Keep only trips starting in the 15 central grid cells
    central_cells = [1418, 1417, 1416, 1439, 1438, 1437, 1500, 1499, 1498,
                     1541, 1540, 1539, 1582, 1581, 1580]
    data_for_modelling = data_for_modelling[data_for_modelling['index_right'].isin(central_cells)]
    hourly_aggregated_starts = data_for_modelling.resample('1h', on='STARTTIME')['STARTTIME'].count()

    merged = weather_data.join(hourly_aggregated_starts, on='date', how='right')
    merged = merged[(merged['date'] >= start) & (merged['date'] <= end)]
    merged.rename(columns={'STARTTIME': 'trip_start_count'}, inplace=True)
    merged.fillna({'trip_start_count': 0}, inplace=True)
    merged['hour'] = merged['date'].dt.hour
    merged['day_of_week'] = merged['date'].dt.dayofweek
    merged['month'] = merged['date'].dt.month
    merged['is_weekend_or_holiday'] = (merged['day_of_week'].isin([5, 6]) | merged['date'].dt.date.isin(holiday_dates)).astype(int)
    merged['is_strike'] = merged['date'].dt.date.isin(strike_dates).astype(int)
    # merged = merged.drop(columns=['date'])
    return merged
In [92]:
holiday_dates_2023 = pd.to_datetime([
    "2023-01-01",
    "2023-01-06",
    "2023-04-07",
    "2023-04-10",
    "2023-05-01",
    "2023-05-18",
    "2023-05-29",
    "2023-06-08",
    "2023-08-15",
    "2023-10-03",
    "2023-11-01",
    "2023-12-25",
    "2023-12-26"
]).date
In [93]:
strikes_dates_2023 = pd.to_datetime([
    "2023-03-02",
    "2023-03-03",
    "2023-03-27",
    "2023-05-19"
]).date
In [94]:
data_for_modelling = prepare_data_for_modelling(grid, starts_geometry, data_filtered, weather_2023_df, holiday_dates_2023, strikes_dates_2023, '2023-01-01', '2024-01-01')
data_for_modelling
Out[94]:
date temperature_2m precipitation wind_speed_10m trip_start_count hour day_of_week month is_weekend_or_holiday is_strike
2 2023-01-01 00:00:00 7.5715 0.0 11.726277 1 0 6 1 1 0
3 2023-01-01 01:00:00 7.6215 0.0 13.237039 15 1 6 1 1 0
4 2023-01-01 02:00:00 7.6215 0.0 10.691453 12 2 6 1 1 0
5 2023-01-01 03:00:00 6.9715 0.0 10.195057 15 3 6 1 1 0
6 2023-01-01 04:00:00 6.8715 0.0 10.972620 13 4 6 1 1 0
... ... ... ... ... ... ... ... ... ... ...
8757 2023-12-31 19:00:00 3.5215 1.0 16.746773 2 19 6 12 1 0
8758 2023-12-31 20:00:00 2.9215 0.4 14.336861 3 20 6 12 1 0
8759 2023-12-31 21:00:00 2.8715 0.1 13.237038 5 21 6 12 1 0
8760 2023-12-31 22:00:00 2.8215 0.1 14.529915 4 22 6 12 1 0
8761 2023-12-31 23:00:00 2.9215 0.1 13.783817 9 23 6 12 1 0

8760 rows × 10 columns

Basic statistical metrics of the features look plausible.

In [95]:
data_for_modelling.describe()
Out[95]:
date temperature_2m precipitation wind_speed_10m trip_start_count hour day_of_week month is_weekend_or_holiday is_strike
count 8760 8760.000000 8760.000000 8760.000000 8760.000000 8760.000000 8760.000000 8760.000000 8760.000000 8760.000000
mean 2023-07-02 11:30:00 10.689720 0.131016 11.269616 23.502169 11.500000 3.008219 6.526027 0.320548 0.010959
min 2023-01-01 00:00:00 -13.628500 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000
25% 2023-04-02 05:45:00 4.621500 0.000000 6.763786 6.000000 5.750000 1.000000 4.000000 0.000000 0.000000
50% 2023-07-02 11:30:00 10.171500 0.000000 9.686609 17.000000 11.500000 3.000000 7.000000 0.000000 0.000000
75% 2023-10-01 17:15:00 16.671501 0.000000 14.118243 34.000000 17.250000 5.000000 10.000000 1.000000 0.000000
max 2023-12-31 23:00:00 33.871498 13.700000 43.850594 224.000000 23.000000 6.000000 12.000000 1.000000 1.000000
std NaN 8.087231 0.459431 6.596209 22.653266 6.922582 2.003519 3.448048 0.466714 0.104116

Let us visualise some example data. The chart below displays precipitation, temperature, and trip counts over several summer days. Although not all rainy hours occurred during daytime, there are several instances where declines in trip counts appear to coincide with rainfall. This suggests that rain may have negatively impacted bike usage in some cases.

In [96]:
fig = plt.figure(figsize=(15, 8));
sns.lineplot(data_for_modelling[['date', 'trip_start_count', 'precipitation', 'temperature_2m']].query('date > "2023-08-26" & date < "2023-08-31"').set_index('date'));
plt.title('How Rain and Temperature Affected Trip Starts')
Out[96]:
<Axes: xlabel='date'>
Out[96]:
Text(0.5, 1.0, 'How Rain and Temperature Affected Trip Starts')
No description has been provided for this image

We use an XGBoost regressor to model the number of trips. As a first step, we perform a GridSearch to select the best combination of hyperparameters, using cross-validation to evaluate model performance for each specification. Once the best model is selected, we train it on the entire dataset for 2023, and then use it to predict the hourly number of trips in 2022 across the same 15 central grid cells.

At first glance, using older data as a test set may seem counterintuitive. However, in this case, the 2022 dataset serves as a valid out-of-sample test set, similar to many real-world scenarios. Importantly, we do not use any lagged variables, and no feature in the training data is tied to values from 2022. Two additional points support this choice:

  • Data for 2024 is unfortunately not available at the time of writing.
  • No major public transport strikes occurred in 2022. While this makes 2022 suitable for testing model generalisability under more “normal” conditions, training on 2022 alone would have been problematic — the model would lack knowledge of strike events and would not learn how this feature affects trip counts.
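The GridSearch cell further below wraps the XGBoost regressor in a TransformedTargetRegressor with log1p/expm1. As a brief illustration (using the quartiles and maximum of `trip_start_count` from the `.describe()` output above), the transform compresses the heavy right tail of the count distribution while remaining exactly invertible, including at zero:

```python
import numpy as np

# Quartiles and maximum of trip_start_count from the summary table above.
counts = np.array([0.0, 6.0, 17.0, 34.0, 224.0])
transformed = np.log1p(counts)     # log(1 + y): defined at zero, compresses large counts
roundtrip = np.expm1(transformed)  # exact inverse of log1p
print(np.allclose(roundtrip, counts))  # True
```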
In [97]:
groups = data_for_modelling['date'].dt.date  
cv = GroupKFold(n_splits=5, shuffle=True, random_state=42)

# Define features and target
X = data_for_modelling.drop(['trip_start_count', 'date'], axis=1)
y = data_for_modelling['trip_start_count']

We use the GroupKFold strategy for cross-validation, with groups defined by calendar days, to avoid data leakage across time. This ensures that all observations from the same day are kept together, either entirely in the training set or entirely in the validation set. This is particularly important in our case because strike events affect entire days. If data from a strike day were split between training and validation, the model could indirectly "learn" about the strike from the training set: rather than attributing the effect to the strike indicator variable, it would memorise a particular combination of all feature values, leading to overly optimistic performance estimates. Grouping by day prevents this kind of leakage and yields a more realistic evaluation of the model's generalisation performance.
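A minimal check on toy data (not the notebook's variables) that GroupKFold indeed keeps each group on one side of every split:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Four toy "days" of six hourly rows each; with n_splits=4, each fold
# holds out exactly one whole day.
X_toy = np.arange(24).reshape(-1, 1)
days = np.repeat(["mon", "tue", "wed", "thu"], 6)
for train_idx, val_idx in GroupKFold(n_splits=4).split(X_toy, groups=days):
    # No day appears in both the training and the validation indices.
    assert set(days[train_idx]).isdisjoint(days[val_idx])
print("no day is split across train and validation")
```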

In [98]:
y
Out[98]:
2        1
3       15
4       12
5       15
6       13
        ..
8757     2
8758     3
8759     5
8760     4
8761     9
Name: trip_start_count, Length: 8760, dtype: int64
In [99]:
X
Out[99]:
temperature_2m precipitation wind_speed_10m hour day_of_week month is_weekend_or_holiday is_strike
2 7.5715 0.0 11.726277 0 6 1 1 0
3 7.6215 0.0 13.237039 1 6 1 1 0
4 7.6215 0.0 10.691453 2 6 1 1 0
5 6.9715 0.0 10.195057 3 6 1 1 0
6 6.8715 0.0 10.972620 4 6 1 1 0
... ... ... ... ... ... ... ... ...
8757 3.5215 1.0 16.746773 19 6 12 1 0
8758 2.9215 0.4 14.336861 20 6 12 1 0
8759 2.8715 0.1 13.237038 21 6 12 1 0
8760 2.8215 0.1 14.529915 22 6 12 1 0
8761 2.9215 0.1 13.783817 23 6 12 1 0

8760 rows × 8 columns

In [100]:
%%time

xgb = XGBRegressor(objective='reg:squarederror', random_state=42)

param_grid = {
    'regressor__n_estimators': [100, 200],
    'regressor__max_depth': [3, 5, 7],
    'regressor__learning_rate': [0.01, 0.1],
    'regressor__subsample': [0.8, 1.0],
    'regressor__colsample_bytree': [0.8, 1.0],
    'regressor__reg_alpha': [0, 0.1],
    'regressor__reg_lambda': [1, 10]
}


# Wrap XGBoost with target transformation
model = TransformedTargetRegressor(
    regressor=xgb,
    func=np.log1p,     # transform y before fitting
    inverse_func=np.expm1  # inverse transform ŷ before scoring
)

grid_search = GridSearchCV(
    estimator=model,
    param_grid=param_grid,
    cv=cv,
    scoring='r2',
    return_train_score=True,
    verbose=2,
    n_jobs=-1
)

grid_search.fit(X, y, groups=groups)

print("Best parameters:", grid_search.best_params_)
print("Best CV R² score:", grid_search.best_score_)
Fitting 5 folds for each of 192 candidates, totalling 960 fits
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
...
[verbose GridSearchCV fold log truncated: per-fold fits over colsample_bytree in {0.8, 1.0}, learning_rate in {0.01, 0.1}, max_depth in {3, 5, 7}, n_estimators in {100, 200}, reg_alpha in {0, 0.1}, reg_lambda in {1, 10}, subsample in {0.8, 1.0}; individual fit times ranged from 0.1 s to 0.7 s]
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.5s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=0.8, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.5s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
Best parameters: {'regressor__colsample_bytree': 1.0, 'regressor__learning_rate': 0.1, 'regressor__max_depth': 5, 'regressor__n_estimators': 200, 'regressor__reg_alpha': 0, 'regressor__reg_lambda': 10, 'regressor__subsample': 0.8}
Best CV R² score: 0.8304430365562439
CPU times: user 4.5 s, sys: 697 ms, total: 5.2 s
Wall time: 47.5 s
In [101]:
# choosing the best model and training it on the whole 2023 dataset
best_model = grid_search.best_estimator_
best_model.fit(X, y)
Out[101]:
TransformedTargetRegressor(func=<ufunc 'log1p'>, inverse_func=<ufunc 'expm1'>,
                           regressor=XGBRegressor(base_score=None, booster=None,
                                                  callbacks=None,
                                                  colsample_bylevel=None,
                                                  colsample_bynode=None,
                                                  colsample_bytree=1.0,
                                                  device=None,
                                                  early_stopping_rounds=None,
                                                  enable_categorical=False,
                                                  eval_metric=None,
                                                  feature_types=None,
                                                  feature_weights=None,
                                                  gamma=None, grow_policy=None,
                                                  importance_type=None,
                                                  interaction_constraints=None,
                                                  learning_rate=0.1,
                                                  max_bin=None,
                                                  max_cat_threshold=None,
                                                  max_cat_to_onehot=None,
                                                  max_delta_step=None,
                                                  max_depth=5, max_leaves=None,
                                                  min_child_weight=None,
                                                  missing=nan,
                                                  monotone_constraints=None,
                                                  multi_strategy=None,
                                                  n_estimators=200, n_jobs=None,
                                                  num_parallel_tree=None, ...))
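The fitted estimator wraps XGBoost in a `TransformedTargetRegressor`, so training happens on `log1p(y)` and predictions are mapped back to the count scale with `expm1`. A minimal self-contained sketch of this pattern, with a plain linear regressor and toy count data standing in for the real pipeline:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

# Toy hourly counts: exponential in the feature, so log1p linearises them.
rng = np.random.default_rng(0)
X_toy = rng.uniform(0, 1, size=(200, 1))
y_toy = np.expm1(3 * X_toy[:, 0]) + rng.normal(0, 0.1, size=200)

# The inner regressor is fit on log1p(y); predict() applies expm1
# automatically, so predictions come back on the original count scale.
model = TransformedTargetRegressor(
    regressor=LinearRegression(),
    func=np.log1p,
    inverse_func=np.expm1,
)
model.fit(X_toy, y_toy)
preds = model.predict(X_toy)
print(preds[:3])
```

The log transform keeps the heavy right tail of hourly counts from dominating the squared-error loss, which is why it suits skewed count targets like trip starts.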

Let us now load the bike trip data for 2022. Since it follows the same format as the 2023 dataset, we apply the same preprocessing steps.

In [102]:
data_2022 = pd.read_csv('MVG_Rad_Fahrten_2022.csv', sep=';', decimal=',')
data_2022.columns = data_2022.columns.str.strip()
data_2022.drop(columns=['Row'], inplace=True)
data_2022.reset_index(names='trip_index', inplace=True)

for col in ['STARTTIME', 'ENDTIME']:
    data_2022[col] = pd.to_datetime(data_2022[col])

for col in ['STARTLAT', 'STARTLON', 'ENDLAT', 'ENDLON', 'RENTAL_STATION_NAME', 'RETURN_IS_STATION', 'RETURN_STATION_NAME']:
    data_2022[col] = data_2022[col].str.strip()

for col in ['STARTLAT', 'STARTLON', 'ENDLAT', 'ENDLON']:
    data_2022[col] = data_2022[col].astype(float)


data_2022.loc[data_2022['RENTAL_STATION_NAME'] == "Schwanthalerhöhe", 'STARTLON'] = 11.54453
data_2022.loc[data_2022['RETURN_STATION_NAME'] == "Schwanthalerhöhe", 'ENDLON'] = 11.54453

data_2022.loc[data_2022['RENTAL_STATION_NAME'] == "Schwanthalerhöhe", 'STARTLAT'] = 48.13363
data_2022.loc[data_2022['RETURN_STATION_NAME'] == "Schwanthalerhöhe", 'ENDLAT'] = 48.13363

ranges = {'STARTLAT': (47.9, 48.32), 'ENDLAT': (47.9, 48.32), 'STARTLON': (11.2, 12), 'ENDLON': (11.2, 12)}

query_str = ' & '.join([f'{col} >= {low} & {col} <= {high}' for col, (low, high) in ranges.items()])
data_2022_filtered = data_2022.query(query_str).copy()
data_2022_filtered
/var/folders/0j/hc4b4c153dzfbvkzcxfbtjv80000gn/T/ipykernel_1324/4087005322.py:1: DtypeWarning: Columns (9) have mixed types. Specify dtype option on import or set low_memory=False.
  data_2022 = pd.read_csv('MVG_Rad_Fahrten_2022.csv', sep=';', decimal=',')
Out[102]:
trip_index STARTTIME ENDTIME STARTLAT STARTLON ENDLAT ENDLON RENTAL_IS_STATION RENTAL_STATION_NAME RETURN_IS_STATION RETURN_STATION_NAME
0 0 2022-01-01 00:02:00 2022-01-01 00:09:00 48.13658 11.59283 48.14159 11.59721 0 NaN
1 1 2022-01-01 00:03:00 2022-01-01 00:07:00 48.13659 11.59282 48.14165 11.59701 0 NaN
2 2 2022-01-01 00:16:00 2022-01-01 00:29:00 48.16849 11.55028 48.15526 11.54012 0 NaN Albrechtstraße
3 3 2022-01-01 00:16:00 2022-01-01 00:29:00 48.16852 11.55020 48.15526 11.54012 0 NaN Albrechtstraße
5 5 2022-01-01 00:24:00 2022-01-01 00:35:00 48.16862 11.55234 48.15288 11.56669 0 NaN
... ... ... ... ... ... ... ... ... ... ... ...
709139 709139 2022-12-31 23:38:00 2022-12-31 23:48:00 48.16371 11.57453 48.16843 11.55567 0 NaN
709140 709140 2022-12-31 23:39:00 2023-01-01 00:56:00 48.15855 11.53505 48.17049 11.54825 0 NaN
709141 709141 2022-12-31 23:43:00 2022-12-31 23:57:00 48.15772 11.51174 48.17067 11.54825 0 NaN
709142 709142 2022-12-31 23:48:00 2022-12-31 23:56:00 48.14471 11.61595 48.14085 11.59835 0 NaN
709143 709143 2022-12-31 23:52:00 2023-01-01 16:36:00 47.99317 11.73690 47.99974 11.76324 1 Bahnhof Dürnhaar NaN

701865 rows × 11 columns

Let us now construct the test dataset, including both the features and the target variable.

In [103]:
strikes_dates_2022 = []
holiday_dates_2022 = pd.to_datetime([
    "2022-01-01",
    "2022-01-06",
    "2022-04-15",
    "2022-04-18",
    "2022-05-01",
    "2022-05-26",
    "2022-06-06",
    "2022-06-16",
    "2022-08-15",
    "2022-10-03",
    "2022-11-01",
    "2022-12-25",
    "2022-12-26"
]).date
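The test features mirror those built for 2023: hour, day of week, month, and a combined weekend-or-holiday flag. A minimal sketch of how such calendar features can be derived from an hourly datetime index and a holiday list (column names match the notebook, but this is a simplified stand-in for `prepare_data_for_modelling`):

```python
import pandas as pd

# Hypothetical hourly index spanning a few days around Epiphany 2022.
idx = pd.date_range("2022-01-05", "2022-01-07 23:00", freq="h")
df = pd.DataFrame({"date": idx})

holiday_dates = pd.to_datetime(["2022-01-06"]).date  # Epiphany, a Bavarian holiday

df["hour"] = df["date"].dt.hour
df["day_of_week"] = df["date"].dt.dayofweek          # Monday = 0
df["month"] = df["date"].dt.month
df["is_weekend_or_holiday"] = (
    (df["day_of_week"] >= 5) | df["date"].dt.date.isin(holiday_dates)
).astype(int)

print(df.loc[df["date"].dt.date == pd.Timestamp("2022-01-06").date()].head(1))
```

Here 2022-01-06 falls on a Thursday, so it is flagged through the holiday list rather than the weekend condition.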
In [104]:
weather_2022_df = get_weather_data("2022-01-01", "2023-01-01", 48.1374, 11.5755)
weather_2022_df
Coordinates 48.1195068359375°N 11.550000190734863°E
Elevation 524.0 m asl
Timezone b'Europe/Berlin' b'GMT+2'
Timezone difference to GMT+0 7200 s
Out[104]:
date temperature_2m precipitation wind_speed_10m
0 2021-12-31 22:00:00 9.221499 0.0 19.011953
1 2021-12-31 23:00:00 9.021500 0.0 19.011953
2 2022-01-01 00:00:00 8.621500 0.0 16.645864
3 2022-01-01 01:00:00 8.271500 0.0 15.111424
4 2022-01-01 02:00:00 7.721500 0.0 13.397612
... ... ... ... ...
8779 2023-01-01 17:00:00 8.771500 0.0 9.085988
8780 2023-01-01 18:00:00 8.771500 0.0 9.511088
8781 2023-01-01 19:00:00 6.971500 0.0 8.311245
8782 2023-01-01 20:00:00 5.521500 0.0 7.342588
8783 2023-01-01 21:00:00 4.921500 0.0 8.825508

8784 rows × 4 columns

In [105]:
starts_geometry_2022 = [Point(xy) for xy in zip(data_2022_filtered['STARTLON'], data_2022_filtered['STARTLAT'])]
In [106]:
test_data = prepare_data_for_modelling(grid, starts_geometry_2022, data_2022_filtered, weather_2022_df, holiday_dates_2022, strikes_dates_2022, '2022-01-01', '2023-01-01')
test_data
Out[106]:
date temperature_2m precipitation wind_speed_10m trip_start_count hour day_of_week month is_weekend_or_holiday is_strike
2 2022-01-01 00:00:00 8.6215 0.0 16.645864 3 0 5 1 1 0
3 2022-01-01 01:00:00 8.2715 0.0 15.111424 9 1 5 1 1 0
4 2022-01-01 02:00:00 7.7215 0.0 13.397612 15 2 5 1 1 0
5 2022-01-01 03:00:00 7.7715 0.0 14.917212 15 3 5 1 1 0
6 2022-01-01 04:00:00 7.8215 0.0 12.574260 8 4 5 1 1 0
... ... ... ... ... ... ... ... ... ... ...
8757 2022-12-31 19:00:00 8.2715 0.0 11.753877 20 19 5 12 1 0
8758 2022-12-31 20:00:00 7.9215 0.0 10.948973 12 20 5 12 1 0
8759 2022-12-31 21:00:00 7.4715 0.0 12.074766 5 21 5 12 1 0
8760 2022-12-31 22:00:00 7.4215 0.0 12.722830 10 22 5 12 1 0
8761 2022-12-31 23:00:00 7.6215 0.0 12.245293 8 23 5 12 1 0

8760 rows × 10 columns

In [107]:
X_test = test_data.drop(['trip_start_count', 'date'], axis=1)
y_test = test_data['trip_start_count']
In [108]:
test_score = best_model.score(X_test, y_test)
test_score
Out[108]:
0.7966654300689697

The R² score on previously unseen data is close to 0.8, which is a strong result, especially given that the model relies solely on simple, easily accessible features.
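R² alone does not say how large the typical hourly error is, so it may be worth complementing it with an absolute-error metric (`mean_absolute_error` is already imported above). A self-contained illustration with hypothetical toy values of how the two metrics are computed:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical hourly trip counts and model predictions.
y_true = np.array([3, 9, 15, 15, 8, 20, 12])
y_hat = np.array([5, 10, 12, 14, 6, 16, 10])

r2 = r2_score(y_true, y_hat)              # fraction of variance explained
mae = mean_absolute_error(y_true, y_hat)  # average error in trips per hour

print(f"R2 = {r2:.3f}, MAE = {mae:.2f} trips/hour")
```

Reporting MAE in trips per hour makes the model's accuracy tangible to operational stakeholders in a way a variance ratio cannot.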

Let's plot the predicted and the actual trip counts in 2022.

In [109]:
y_pred = best_model.predict(X_test)
In [110]:
y_test_pred = pd.DataFrame({'y_test': y_test.to_numpy(), 'y_pred': y_pred, 'temperature': test_data['temperature_2m'].to_numpy(),
                           'precipitation': test_data['precipitation'].to_numpy()}, index=test_data['date'])
y_test_pred
Out[110]:
y_test y_pred temperature precipitation
date
2022-01-01 00:00:00 3 14.561603 8.6215 0.0
2022-01-01 01:00:00 9 16.955082 8.2715 0.0
2022-01-01 02:00:00 15 10.381706 7.7215 0.0
2022-01-01 03:00:00 15 9.806506 7.7715 0.0
2022-01-01 04:00:00 8 5.379688 7.8215 0.0
... ... ... ... ...
2022-12-31 19:00:00 20 9.164827 8.2715 0.0
2022-12-31 20:00:00 12 4.876100 7.9215 0.0
2022-12-31 21:00:00 5 4.277053 7.4715 0.0
2022-12-31 22:00:00 10 4.456097 7.4215 0.0
2022-12-31 23:00:00 8 4.412558 7.6215 0.0

8760 rows × 4 columns

In [111]:
fig = plt.figure(figsize=(15, 8));
sns.lineplot(y_test_pred[['y_test', 'y_pred']]);
plt.title('Predicted vs. Actual Trip Start Count in 2022');

The plot is quite dense, making it difficult to distinguish individual days. Nevertheless, the predicted values clearly track the local peaks of the actual trip count curve, indicating that the model captures important patterns in the data. Let's zoom in on a shorter time frame to take a closer look at which patterns the model responds to when making its predictions.

In [ ]:
prediction_showcase = y_test_pred.loc['2022-03-30': '2022-04-06', :]

fig, ax1 = plt.subplots(figsize=(15, 8));

# Plot regular series on left Y-axis
ax1.plot(prediction_showcase.index, prediction_showcase['y_test'], label='y_test', color='tab:blue');
ax1.plot(prediction_showcase.index, prediction_showcase['y_pred'], label='y_pred', color='tab:red');
ax1.plot(prediction_showcase.index, prediction_showcase['temperature'], label='temperature', color='tab:orange',  linestyle='--');
ax1.set_ylabel("Trip count, temperature");
ax1.tick_params(axis='y');
ax1.legend(loc='upper left');

# Create right Y-axis
ax2 = ax1.twinx();
ax2.plot(prediction_showcase.index, prediction_showcase['precipitation'], label='precipitation', color='tab:green', linestyle='--');
ax2.set_ylabel("Precipitation");
ax2.tick_params(axis='y', labelcolor='tab:green');
ax2.legend(loc='upper right');

plt.title("Predicted vs Actual Trip Counts with Key Features");
plt.tight_layout();
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.7s

[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s

[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s

[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[... verbose GridSearchCV output truncated: [CV] END lines for every combination of regressor__colsample_bytree (1.0), regressor__learning_rate (0.01 / 0.1), regressor__max_depth (3 / 5 / 7), regressor__n_estimators (100 / 200), regressor__reg_alpha (0 / 0.1), regressor__reg_lambda (1 / 10) and regressor__subsample (0.8 / 1.0); each fold fit in roughly 0.1–0.9 s ...]
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.6s

[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.8s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.01, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.9s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.1s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=3, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.2s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=5, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.3s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=100, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.4s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=1, regressor__subsample=1.0; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=1, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=0.8; total time=   0.7s
[CV] END regressor__colsample_bytree=1.0, regressor__learning_rate=0.1, regressor__max_depth=7, regressor__n_estimators=200, regressor__reg_alpha=0.1, regressor__reg_lambda=10, regressor__subsample=1.0; total time=   0.7s

Both the actual and predicted numbers of trips dropped sharply on 1 April 2022, remained relatively low until 4 April, and began to recover on 5 April. A likely explanation is the heavy precipitation between 1 and 3 April, accompanied by a drop in temperature to nearly 0°C on 3 April. As the temperature rose and precipitation declined, trip counts recovered as well, a response the model appears to capture. Additionally, 4 April was a Monday, marking the start of a new working week, so increased commuting demand may also have contributed to the recovery.
